
UNIVERSITÄT KONSTANZ
Mathematisch-Naturwissenschaftliche Sektion
Fachbereich Psychologie

Bayesian Updating in the EEG –
Differentiation between Automatic and Controlled Processes
of Human Economic Decision Making

Doctoral dissertation (Dissertationsschrift) submitted in February 2011
by Sabine Hügelschäfer
in fulfillment of the requirements for the academic degree
Doktor der Naturwissenschaften (Dr. rer. nat.)

Date of oral examination: June 15, 2011
First referee: PD Dr. Anja Achtziger
Second referee: Prof. Dr. Carlos Alós-Ferrer

Konstanzer Online-Publikations-System (KOPS)
URL: http://nbn-resolving.de/urn:nbn:de:bsz:352-145985
Acknowledgments

I would like to thank everyone who contributed to the success of the present work.

My special thanks go to my doctoral advisor ("Doktormutter") PD Dr. Anja Achtziger, who made it possible for me to write this dissertation on an exciting, interdisciplinary topic, supported me throughout, and shielded me from distractions wherever possible. Thank you for the excellent, dedicated supervision, the valuable scientific input, and the trust placed in me!

I would further like to thank Prof. Dr. Carlos Alós-Ferrer, from whose extraordinary expertise I was able to benefit. Thank you for the constructive discussions of the topic and the continuing interest in the progress of this work, as well as for the support in questions of statistical analysis.

I also cordially thank PD Dr. Marco Steinhauser, who supported me at all times with valuable advice, above all concerning the recording and analysis of the EEG data, and from whom I was able to learn a great deal in this respect.

I thank Dipl.-Psych. Alexander Jaudas for all his help in technical matters and in particular for programming the computer experiments.

Further thanks go to my colleagues at the Chair of Social Psychology and Motivation and in the research group "Center for Psychoeconomics" for all the scientific and emotional support they offered.

In addition, I thank the student research assistants for their commitment during the studies, and all the students who volunteered as participants and without whom the present work would not have been possible.
CONTENTS

Zusammenfassung ………………………………………………………………….. 1
Abstract ……………………………………………………………………………... 3
1 Introduction ………………………………………………………………………. 4
1.1 Dual Processes ………………………………………………………………….. 5
1.1.1 Dual Process Theories of Judgment and Decision Making in Psychology …… 6
1.1.2 Dual Process Models in Economics …………………………………………... 10
1.2 Rationality and Deviations ……………………………………………………… 12
1.2.1 Bayesian Updating and Utility Maximization ………………………………… 12
1.2.2 Heuristic or Intuitive Decision Making ……………………………………….. 14
1.2.2.1 Conservatism Heuristic ……………………………………………………... 16
1.2.2.2 Representativeness Heuristic ………………………………………………... 18
1.2.2.3 Reinforcement Heuristic …………………………………………………….. 20
1.3 Inter- and Intraindividual Heterogeneity regarding Controlled versus Automatic, or Rational versus Intuitive Decision Making ………………………………………... 24
1.4 Neuronal Correlates of Decision Making ……………………………………….. 27
1.4.1 Recent Findings from Neuroeconomics ……………………………………….. 27
1.4.2 FRN as an Indicator of Reinforcement Learning ……………………………… 28
1.4.3 LRP as an Indicator of Response Activation …………………………………... 31
1.4.4 N2 as an Indicator of Response Conflict ………………………………………. 32
1.4.5 Interindividual Differences in Decision Making Reflected in Neural Activity ... 35
1.5 Summary and Theoretical Implications for the Studies …………………………. 36
2 Studies ……………………………………………………………………………... 37
2.1 Study 1a: Bayesian Updating and Reinforcement Learning – Investigating Rational and Boundedly Rational Decision Making by means of Error Rates and the FRN ….. 37
2.1.1 Overview ……………………………………………………………………….. 37
2.1.2 Method ………………………………………………………………………….. 40
2.1.3 Results …………………………………………………………………………... 50
2.1.4 Discussion ……………………………………………………………………….. 66
2.2 Study 1b: Bayesian Updating and Reinforcement Learning – Fostering Rational Decision Making by Removing Affective Attachment of Feedback …………………. 78
2.2.1 Overview ………………………………………………………………………… 78
2.2.2 Method …………………………………………………………………………... 80
2.2.3 Results …………………………………………………………………………… 83
2.2.4 Discussion ……………………………………………………………………….. 96
2.3 Study 2: Conservatism and Representativeness in Bayesian Updating – Investigating Rational and Boundedly Rational Decision Making by means of Error Rates, Response Times, and ERPs ……………………………………………………………………… 106
2.3.1 Overview ………………………………………………………………………… 106
2.3.2 Method …………………………………………………………………………... 110
2.3.3 Results …………………………………………………………………………… 122
2.3.4 Discussion ……………………………………………………………………….. 137
3 General Discussion ………………………………………………………………….. 153
3.1 The Present Research ……………………………………………………………… 153
3.1.1 Overview ………………………………………………………………………… 153
3.1.2 Findings ………………………………………………………………………….. 154
3.2 Implications, Limitations, and Ideas for Future Research …………………………. 157
3.3 Conclusion …………………………………………………………………………. 162
4 References …………………………………………………………………………… 163
5 Appendix …………………………………………………………………………….. 203
Appendix A: List of Tables ……………………………………………………………. 203
Appendix B: List of Figures …………………………………………………………… 205
Appendix C: Material of Study 1a (in German) ……………………………………….. 208
Appendix D: Material of Study 1b (in German) ……………………………………….. 227
Appendix E: Material of Study 2 (in German) …………………………………………. 244
Zusammenfassung

Research has shown that economic decisions frequently do not conform to the normative prescriptions of rationality but deviate systematically from rational behavior (e.g., Starmer, 2000). Dual-process models (see Evans, 2008; Sanfey & Chang, 2008; Weber & Johnson, 2009) offer one possible explanation for this by distinguishing between conscious, controlled processes, which require cognitive resources, and fast, automatic processes. In many cases, deviations from rationality, such as violations of Bayesian updating (e.g., Fujikawa & Oda, 2005; Ouwersloot, Nijkamp, & Rietveld, 1998; Zizzo, Stolarz-Fantino, Wen, & Fantino, 2000), can be attributed to rather automatic, intuitive processes. It can be assumed that conflicts between such intuitive strategies and Bayesian updating influence the duration and outcomes of decisions as well as electrocortical activity. Moreover, it stands to reason that personality characteristics such as faith in intuition (Epstein, Pacini, Denes-Raj, & Heier, 1996) and event-related potentials of the EEG can explain a large part of the between-person variance in decision behavior.

To pursue these questions, the studies presented in this dissertation examined decision behavior and the corresponding electrocortical activity during tasks in which participants had to update a priori probabilities on the basis of new information in order to maximize their payoff, and in which an intuitive strategy sometimes conflicted with Bayesian calculations.

In Study 1a, the reinforcement heuristic (Charness & Levin, 2005) led to a high error rate, especially among people who trusted their intuition. Participants' susceptibility to this rather automatic heuristic was significantly positively associated with their FRN amplitude. When the heuristic was not available (Study 1b), participants were better able to apply the controlled process of Bayesian updating. Here, too, the FRN amplitude reflected interindividual differences in behavior. In Study 2, conflicts between the representativeness heuristic (Grether, 1980, 1992) and Bayesian calculations led to a high error rate and long response times and were associated with a strongly pronounced N2 amplitude. Participants' individual N2 amplitudes reflected their ability to perceive this conflict and, accordingly, to suppress the automatic response. In addition, conservatism (Edwards, 1968) was positively correlated with their LRP amplitude and with faith in intuition. Overall, the results of the studies support the relevance of the dual-process perspective in the context of economic decisions. Furthermore, the findings strongly suggest that interindividual differences are highly relevant in this context, and demonstrate that neuroscientific methods enable a better understanding of the physiological basis of (boundedly) rational decision making.
Abstract
Research has shown that economic decision makers often do not behave according
to the prescriptions of rationality, but instead show systematic deviations from rational
behavior (e.g., Starmer, 2000). One approach to explain these deviations is to take a dual-process perspective (see Evans, 2008; Sanfey & Chang, 2008; Weber & Johnson, 2009) in
which a distinction is made between deliberate, resource-consuming controlled processes
and fast, effortless automatic processes. In many cases, deviations from rationality such as
violations of Bayesian updating (e.g., Fujikawa & Oda, 2005; Ouwersloot, Nijkamp, &
Rietveld, 1998; Zizzo, Stolarz-Fantino, Wen, & Fantino, 2000) may be ascribed to rather
automatic, intuitive processes. Conflicts between such intuitive strategies and Bayesian
updating can be assumed to affect decision outcomes and response times, as well as
electrocortical activity. Moreover, it is suggested that personality characteristics such as
faith in intuition (Epstein, Pacini, Denes-Raj, & Heier, 1996) and event-related potentials
of the EEG can explain much of the variance in decision behavior across participants.
In order to shed light on these issues, the present dissertation investigated decision
behavior and corresponding electrocortical activity in posterior probability tasks in which
participants had to update prior probabilities on the basis of new evidence to maximize
payoff, and in which an intuitive strategy sometimes conflicted with Bayesian calculations.
In Study 1a the reinforcement heuristic (Charness & Levin, 2005) led to a high rate
of decision errors, especially for people who trusted their intuitive feelings. Participants'
proneness to this rather automatic heuristic was significantly positively associated with the
amplitude of the FRN. When the reinforcement heuristic was not available (Study 1b),
participants were better able to apply the controlled process of Bayesian updating.
Interindividual heterogeneity was again reflected in the amplitude of the FRN. In Study 2,
conflicts between the representativeness heuristic (Grether, 1980, 1992) and Bayesian
calculations led to a high error rate and long response times and were associated with a
strongly pronounced N2 amplitude. Individual N2 amplitudes reflected the extent to which
participants were able to detect this conflict and consequently to suppress the automatic
response. Further, the conservatism heuristic (Edwards, 1968) was positively associated
with participants' LRP amplitude and with faith in intuition. On the whole, results of the
studies support the relevance of the dual-process perspective to economic decision making.
Moreover, findings strongly suggest that interindividual differences are highly relevant
within this context, and prove that neuroscientific methods can provide a better
understanding of the physiological basis of (boundedly) rational decision making.
1 Introduction
The studies presented here emerged as part of the interdisciplinary research project
“Center for Psychoeconomics” at the University of Konstanz. This project brings together
researchers from economics, psychology, and pedagogics with the aim of exploring how
conflicting motives and strategies determine human behavior, how human agents can
regulate the resulting decision conflict, and what are the economic and educational
consequences of both the conflict itself and its regulation.
Within the framework of this project, the studies presented in this dissertation
combined methods of experimental psychology, neuroscience, and microeconomics in
order to investigate the processes underlying human decision behavior and their
consequences for economic performance. Specifically, the conflict between heuristic and
rational strategies, or between automatic and controlled process components, was
addressed in the context of posterior probability tasks¹ which require the use of Bayes'
rule in order to make rational decisions (i.e., maximizing one's payoff). Furthermore, it
was investigated whether and how certain variables such as personality traits and skills
moderate or perturb decision making processes.
In the recent literature in economics and psychology, researchers increasingly hold
the view that decision making is frequently influenced by competing decision
rules. On the one hand, rationality often dictates a specific behavioral pattern for a certain
task. On the other hand, low-level cognitive processes such as heuristics (in the
psychological sense) or behavioral rules (in the economics and game theory sense) also
play a role and may conflict with the prescriptions of rationality. Therefore, in certain
situations human behavior can be understood as the result of a competition or interaction
between (at least) two different processes: one which corresponds to a rational strategy and
other(s) which correspond(s) to “boundedly rational” behavioral rules which exhibit rather
automatic features.
The present dissertation aims at disentangling automatic from controlled processes
of decision making in the context of posterior probability tasks. Psychology and
neuroscience have developed methods to differentiate between the two kinds of processes.
The present project will use some of these methods (e.g., recording error rates, response
times, and electrocortical activity; questionnaires for identifying groups of people with
¹ In these tasks a prior probability is induced, further information is presented, and participants are then required to make decisions which depend on the posterior probability, or to directly report this probability, or both.
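The Bayesian calculation required in such a posterior probability task can be sketched in a few lines. The urn compositions below are hypothetical illustration values, not the parameters of the studies reported later:

```python
# Illustrative posterior probability task: two states of the world are
# equally likely a priori, and a drawn ball (the "new evidence") must be
# used to update the prior via Bayes' rule. All numbers are assumptions
# chosen for the example.

def posterior(prior_a, p_evidence_given_a, p_evidence_given_b):
    """Posterior probability of state A after observing the evidence."""
    numerator = prior_a * p_evidence_given_a
    denominator = numerator + (1 - prior_a) * p_evidence_given_b
    return numerator / denominator

# Prior: both states equally likely. In state A the urn holds 4 black and
# 2 white balls; in state B, 2 black and 4 white. A black ball is drawn.
p_a = posterior(0.5, 4 / 6, 2 / 6)
print(p_a)  # 2/3: the evidence shifts belief toward state A
```

A payoff-maximizing ("rational") decision maker would then bet on whichever state has the higher posterior probability.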
individual characteristics related to decision behavior) in order to explore the interaction of
automatic and controlled processes in economic decision making situations.
A brief review of background information, which is necessary to understand these
phenomena, will be given in the introduction section. First, the dual-process account will
be described which constitutes the basis of the present research, and a short review of dual-process models in psychology and economics will be presented. Then a discussion will be
given of how a "rational" decision maker is assumed to maximize expected payoff by
using Bayes' rule and how this process is often perturbed by systematic biases.
Heuristic or intuitive decision making is introduced and three simple heuristic decision
strategies on which the present research focuses will be detailed. Then, inter- and
intraindividual differences concerning rational decision making will be briefly outlined.
Finally, it will be illustrated how neuroscientific methods can shed light on human decision
making. Three specific neuronal indicators of decision processes will be described that are
of particular interest for the present investigation.
1.1 Dual Processes
The basic research paradigm on which the following studies are based is the idea of
dual processes in human thinking and decision making (e.g., Bargh, 1989; for recent
reviews, see Evans, 2008, or Weber & Johnson, 2009). Dual-process theories which build
upon insights from cognitive and social psychology come in different flavors, but generally
they all postulate the existence of two broad types of human decision processes. The first
type, commonly known as automatic processes, is considered to be fast, effortless, and
requiring no conscious control. The second type, commonly known as controlled
processes, is assumed to be slower, (at least in part) requiring conscious supervision, and
heavily demanding cognitive resources. Dual-process models have been used to investigate
numerous topics in cognitive psychology (e.g., Posner & Snyder, 1975; Schneider &
Shiffrin, 1977; Shiffrin & Schneider, 1977) and social psychology (e.g., Chaiken, 1980;
Chen & Chaiken, 1999; Petty & Cacioppo, 1981, 1986) such as persuasion, activation of
attitudes and stereotypes, problem solving and reasoning, and judgment and decision
making.
1.1.1 Dual Process Theories of Judgment and Decision Making in Psychology
In the domain of judgment and decision making, various dual-process theories have
been proposed to describe the mechanisms underlying actual human behavior (for an
overview, see Evans, 2008; Sanfey & Chang, 2008; Weber & Johnson, 2009). These
theories share the assumption that decision behavior reflects the interplay of two
fundamentally different types of processes. The first type of process is responsible for very
quick and automatic intuitive judgments without requiring conscious control or effort, for
instance based on well-learned prior associations. Since these processes are highly
specialized for domain-specific operations, they are relatively inflexible but can operate in
parallel. Their activation depends on appropriate internal or external triggering cues, for
example, similarity matching. This class of processes is often called “automatic”
(Schneider & Shiffrin, 1977), “impulsive” (Strack & Deutsch, 2004), “reflexive”
(Lieberman, 2003), or “experiential” (Epstein, 1994); it is also referred to as System I (see
Kahneman, 2003; Sloman, 1996, 2002; Stanovich & West, 2000). One way an
automatic/intuitive system can be conceptualized is in terms of mechanisms that integrate
reinforcement outcomes of actions over multiple experiences (Frank, Cohen, & Sanfey,
2009; see also 1.4.2). The other type of reasoning process can be described as deliberative,
serial, more flexible, (partly) conscious, effortful, relying on the application of explicit
rules, and consuming computational resources. Its activation depends on recognizing the
applicability of an abstract rule and on the availability of cognitive resources and
motivation. These processes are termed “controlled” (Schneider & Shiffrin, 1977),
“reflective” (Strack & Deutsch, 2004; Lieberman, 2003), or “rational” (Epstein, 1994), or
are subsumed under System II (see Kahneman, 2003; Sloman, 1996, 2002; Stanovich &
West, 2000). The existence of these complementary systems is said to be, in part, the
reason for the variance of decision outcomes across situations.
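The reinforcement-integration idea mentioned above can be made concrete with a standard incremental value update. This is a generic sketch under assumed parameters, not a model fitted to data in the present studies:

```python
# Minimal sketch of an automatic/intuitive valuation mechanism: the value
# of an action is updated incrementally from reinforcement outcomes over
# multiple experiences, so options with a better reward history come to be
# favored without any explicit probability computation. The learning rate
# and the reward sequence are arbitrary illustration values.

def update(value, reward, alpha=0.1):
    """Move the stored action value a step toward the observed reward."""
    return value + alpha * (reward - value)

value = 0.0
for reward in [1, 1, 0, 1]:  # a short, hypothetical reward history
    value = update(value, reward)
print(round(value, 4))  # 0.2539: a running average of past outcomes
```

Crucially, such a mechanism responds only to experienced outcomes; it is triggered by the reward history itself rather than by recognizing the applicability of an abstract rule.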
Different dual-process accounts hold different views about how automatic and
controlled processes interact: So-called pre-emptive theories postulate an initial selection
between the two types of processes (e.g., Klaczynski, 2000; Petty & Cacioppo, 1986). In
contrast, most theories assume a parallel-competitive structure where several processes
influence a decision simultaneously, each of them either conflicting or cooperating with
the other ones. One noteworthy example of such an account is Epstein's (1994; see also
Epstein & Pacini, 1999) Cognitive-Experiential Self-Theory (CEST) distinguishing
between a rational system (conscious, deliberative, based on language and rational beliefs)
and an experiential system (automatic, unconscious, based on beliefs derived from
emotional experiences). Another well-known example is Sloman's (1996) distinction
between associative, similarity-based reasoning and logical, symbolic, rule-based
reasoning. According to this perspective, conflicts between analytic considerations and
heuristically cued beliefs are noticed by the decision maker (see also Denes-Raj & Epstein,
1994). Reasoning errors occur because the analytic system does not always manage to
override the respective heuristics in case of conflict (for empirical evidence, see, e.g., De
Neys & Glumicic, 2008; De Neys, Vartanian, & Goel, 2008; Moutier & Houdé, 2003). A
different class of dual-process theories postulates a default-interventionist structure where
an automatic process operates by default and a conscious, more controlled process
intervenes at a later stage only if the automatic process does not suffice for successful goal
attainment (e.g., Botvinick, Braver, Barch, Carter, & Cohen, 2001; Devine, 1989; Evans,
2006, 2007b; Fiske & Neuberg, 1990; Frederick, 2005; Glöckner & Betsch, 2008b). A
precondition for the controlled process being able to change the outcome of the decision is
behavioral inhibition of the first impulse created by the automatic process. A theory which
falls into this category of dual-process accounts is the theory of intuitive and reflective
judgment of Kahneman and Frederick (2002, 2005). In this view, reasoning errors occur
because the monitoring by the analytic system is quite unreliable; thus, conflicts between
heuristic and analytic reasoning are typically not detected at all.
Dual-process models also differ in their assumptions regarding the independence of
the two systems or processes. While some (e.g., Brewer, 1988; Fazio, 1990; Kahneman &
Tversky, 1972, 1973; Petty & Cacioppo, 1981) have postulated that they are distinct,
mutually exclusive alternatives that never co-occur, others (e.g., Fiske, Lin, & Neuberg,
1999; Fiske & Neuberg, 1990; Hammond, Hamm, Grassia, & Pearson, 1987) have argued
that they represent two poles of a continuum, and still others (Ferreira, Garcia-Marques,
Sherman, & Sherman, 2006) have stated that they act independently. Another recent approach
(Glöckner & Betsch, 2008b) suggests that automatic and controlled processes are only
partly distinct, and that deliberation only adds certain processing steps to automatic-intuitive processing, which constitutes the default process underlying all decision making
(for empirical evidence, see Horstmann, Ahlgrimm, & Glöckner, 2009).
Some evidence from social neuroscience (Lieberman, 2003, 2007; Lieberman,
Jarcho, & Satpute, 2004; Satpute & Lieberman, 2006) pointed to the association of
automatic and controlled processes with specific separate brain systems, namely the so-called X-system and C-system. The X-system (reflexive) is composed of the lateral
temporal cortex, the ventromedial prefrontal cortex, the dorsal anterior cingulate cortex,
and certain subcortical structures like the amygdala and the basal ganglia, while the C-system (reflective) consists of the lateral and medial prefrontal cortex, the lateral and
medial parietal cortex, the medial temporal lobe, and the rostral anterior cingulate cortex.
However, the idea of two completely distinct neuronal systems that underlie the two types
of processes seems too simplistic and highly unlikely (Frank et al., 2009; Sanfey & Chang,
2008). Instead it appears more likely that a subset of the identified brain structures is
responsible for the emergence of the process dichotomy in different domains (e.g.,
memory, decision making, and attention). As for decision making, decisions can for
example rely on a controlled process which evaluates if-then scenarios in the prefrontal
cortex, or can be guided by an automatic process that incorporates values acquired through
reinforcement learning in the basal ganglia (e.g., Daw, Niv, & Dayan, 2005; see also
1.4.2). Overall, evidence from neuroscience has clearly demonstrated that the human brain
is not a homogenous processor, but instead constantly integrates various specialized
processes (see Loewenstein, Rick, & Cohen, 2008), which is perfectly consistent with the
dual-process perspective described above.
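The contrast between if-then evaluation and reinforcement-based valuation can be illustrated schematically. All values and probabilities below are hypothetical, and the two functions are a deliberately simplified rendering of the model-based versus model-free distinction (Daw et al., 2005):

```python
# Sketch of the two routes to a decision described above: a "model-based"
# controlled process evaluates if-then scenarios using an explicit model of
# outcomes, while a "model-free" automatic process simply consults values
# cached from past reinforcement. All numbers are assumptions.

# Explicit model of each action: {outcome_value: probability}
model = {"left": {10: 0.2, 0: 0.8}, "right": {4: 0.9, 0: 0.1}}
# Cached values from an (assumed) earlier reinforcement history
cached = {"left": 5.0, "right": 3.0}

def model_based_choice(model):
    """Pick the action with the highest expected value under the model."""
    return max(model, key=lambda a: sum(v * p for v, p in model[a].items()))

def model_free_choice(cached):
    """Pick the action with the highest cached value."""
    return max(cached, key=cached.get)

print(model_based_choice(model))  # "right": expected value 3.6 beats 2.0
print(model_free_choice(cached))  # "left": the cached habit disagrees
```

When the cached values are out of date, as here, the two processes recommend different actions, which is exactly the kind of conflict the dual-process perspective highlights.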
Taking the dichotomy between automatic and controlled processes of decision making
as a reference point is, of course, a simplification (e.g., Hammond et al., 1987). Modern research
in social cognition (Bargh, 1994; Gollwitzer & Bargh, 2005) indicates that a conception of
automaticity and controlledness as graded, continuous characteristics of a process is more
appropriate than a clear-cut dichotomy (see also Kahneman & Treisman, 1984; Cohen,
Dunbar, & McClelland, 1990). Moreover, the very same behavior can be more automatic
or more under conscious control depending on the situation. In addition, processes (e.g.,
driving) that initially are controlled can be automated (e.g., Bargh, 1994; Bargh &
Williams, 2006). Actually, due to the limitations of controlled processes, the brain
constantly automates diverse processes (Camerer, Loewenstein, & Prelec, 2005). Results
of several lines of research have shown that there is often not a distinct dissociation
between analytic and heuristic reasoning (Ajzen, 1977; Bar-Hillel, 1979, 1980; Tversky &
Kahneman, 1974, 1982). Most decisions and behaviors are the products of interactions
between the two types of processes (Camerer et al., 2005). Nevertheless, the distinction
between automatic and controlled processes as a simplification is instrumental since it
offers a possible explanation for the observed inconsistencies in human decision making.
In psychology, automatic processes have consistently been shown to be responsible
for many behavioral biases and heuristics which, for an economist, produce deviations
from the rational paradigm (e.g., Chase, Hertwig, & Gigerenzer, 1998; Evans, 2002;
Kahneman, Slovic, & Tversky, 1982; Tversky & Kahneman, 1974). Reliance on automatic
processes can also lead to quite good judgments that approximate rational solutions and are
fast and frugal (e.g., Glöckner, 2008; Glöckner & Betsch, 2008a, 2008b, 2008c; Glöckner
& Dickert, 2008; Hammond et al., 1987), which is not surprising given the fact that
automatic processes are, for example, activated in visual perception (McClelland &
Rumelhart, 1981; Wertheimer, 1938a, 1938b) and social perception (Bruner & Goodman,
1947; Read & Miller, 1998; Read, Vanman, & Miller, 1997) to structure and interpret
information from the environment, and generate information directly utilized in decisions
(Dougherty, Gettys, & Ogden, 1999; Thomas, Dougherty, Sprenger, & Harbison, 2008).
Automatic processes, such as emotions, can play a vital role in determining decisions (e.g.,
Bechara & Damasio, 2005; Bechara, Damasio, Damasio, & Lee, 1999; Damasio, 1994).
Decision makers can use affect as information about the quality of decision options
(Schwarz & Clore, 2003) so that the affective valence attached to an option guides the final
choice (Slovic, Finucane, Peters, & MacGregor, 2002). In certain situations, more
deliberation is even counterproductive and results in defective judgments (Wilson, 2002;
Wilson & Schooler, 1991), so that in particular tasks, disengaging the explicit system
and relying solely on intuition can enhance decision performance (Frank, O'Reilly, &
Curran, 2006). Especially for complex and ill-structured problems comprising a large
amount of information, intuition as a holistic processing mode can outperform analytic
reasoning (e.g., Dijksterhuis, 2004; but see Acker, 2008) as long as there is no bias in the
learning environment (Hogarth, 2005). This is due to the fact that the unconscious is better
at calculating weights for a high number of factors compared to the conscious system,
which might be biased toward irrelevant information (Dijksterhuis & Nordgren, 2006).
Moreover, how well a certain decision strategy works seems to depend on personal
characteristics such as experience (Pretz, 2008). Obviously, it depends on the respective
circumstances and characteristics of the task as well as on person factors if automatic
processes result in sound choices or biases (e.g., Hogarth, 2005; Shiv, Loewenstein,
Bechara, Damasio, & Damasio, 2005; for a review of findings, see Plessner & Czenna,
2008).
1.1.2 Dual Process Models in Economics
Theoretical models in economics have tended to address primarily the outcomes of
decisions with little concern for the underlying decision processes. Furthermore, these
models have often overemphasized the influence of controlled rational processes on human
decision behavior in a quite unrealistic manner (e.g., Loewenstein et al., 2008). The
assumptions of these models are inconsistent with psychological research evidencing that a
major part of human behavior is based on automatic processes (e.g., Bargh, 1994;
Schneider & Shiffrin, 1977), including decision making (e.g., Glöckner & Betsch, 2008b;
Glöckner & Herbold, 2008; Kahneman & Frederick, 2002). It has been repeatedly
demonstrated that automatic processes can even overrule deliberately formed intentions
(e.g., Betsch, Haberstroh, Molter, & Glöckner, 2004; Devine, 1989; Wegner, 1994).
Automatic processes can be observed in a time-window from zero to approximately
600 ms after the presentation of a stimulus (e.g., Devine, 1989; Chwilla, Kolk, & Mulder,
2000). This implies that they are faster than conscious, controlled information processing
that would be required if human behavior actually followed the normative axioms of
inference and choice as is usually postulated in economic theory. Moreover, process
analyses of decisions under risk have directly demonstrated that expected-value choices
rarely result from deliberate calculations of weighted sums (Cokely & Kelley, 2009;
Glöckner & Herbold, 2008). In a similar fashion it has been shown that choices in complex
base-rate tasks are not based solely on deliberate information integration (Glöckner &
Dickert, 2008). Further evidence for the influence of the automatic, intuitive System I on
economic decisions comes, for instance, from studies which have shown that emotions and
corresponding bioregulatory signals play an important role in decision making (e.g.,
Bechara, Damasio, Tranel, & Damasio, 1997; van 't Wout, Kahn, Sanfey, & Aleman,
2006; see Bechara, Damasio, Tranel, & Damasio, 2005), or that incidental moods such as
sadness can significantly alter economic decision making (e.g., Fessler, Pillsworth, &
Flamson, 2004; Harlé & Sanfey, 2007; Ketelaar & Au, 2003; Lerner & Keltner, 2001;
Nelissen, Dijker, & deVries, 2007). The fact that automatic processes have long been
neglected by experimental economics can be seen as a handicap for the understanding of
deviations from behavior that is rational in an economic sense.
In economics, rule-of-thumb behavioral learning rules in decision making have
been extensively studied in the bounded rationality literature. Examples range from
imitation (for a review, see Alós-Ferrer & Schlag, 2009) or basic forms of reinforcement
learning to forward-looking optimization, myopic best reply and even more sophisticated
rules. These phenomena point to a continuum of different rationality levels,
analogous to the continuum of different levels of automaticity versus controlledness. Many
decision processes that economists would consider as being closer to a rational paradigm
would deserve the broad adjective “controlled” from a psychological perspective.
Conversely, many decision rules which economists would consider as representing a
systematic deviation from rationality can be considered to be more “automatic”. The
aforementioned analogy has been supported empirically by Ferreira and colleagues
(Ferreira et al., 2006) who showed that variables known to influence controlled processes
(e.g., cognitive load) affected rule-based reasoning, but not heuristic reasoning. In contrast,
a variable known to influence automatic processes (priming) affected heuristic reasoning,
but not rule-based reasoning. The analogy between a rationality continuum and an
automaticity continuum extends to the idea of conflicting decision processes in economic
situations. For example, when making intertemporal decisions, an immediate reward is
evaluated against a larger but delayed reward. This means that an automatic and a
controlled process interact in determining the final decision.
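The intertemporal conflict just described is commonly modeled with hyperbolic discounting. The discount parameter and reward amounts below are arbitrary illustration values:

```python
# Sketch of the intertemporal conflict: under hyperbolic discounting, a
# small immediate reward can outweigh a larger delayed one, yet the
# preference reverses when both rewards lie further in the future. The
# parameter k and all amounts/delays are assumptions for illustration.

def hyperbolic_value(amount, delay, k=1.0):
    """Present value of a reward received after `delay` time units."""
    return amount / (1 + k * delay)

# An immediate 10 versus 25 delayed by 3 periods:
print(hyperbolic_value(10, 0) > hyperbolic_value(25, 3))    # True: take 10 now
# Both options shifted 10 periods into the future:
print(hyperbolic_value(10, 10) < hyperbolic_value(25, 13))  # True: now wait
```

This preference reversal is precisely the hyperbolic-like discounting behavior (Ainslie & Haslam, 1992) that standard exponential discounting cannot produce.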
Yet, ideas of dual-process accounts have not been taken into account by economic
theory until very recently. During the last few years, models have been built which explain
examples of bounded rationality through the interaction of different selves, reflecting
different decision processes. In contrast to psychology where the parallel-competitive
approach is more common, most economic models describe a default-interventionist
structure. In Benabou and Tirole (2002, 2004, 2006) the sequential structure of the
interaction among the selves is modeled formally through an extensive form game.
Fudenberg and Levine (2006) model decisions as the conflict between a long-run and a
short-run player. Loewenstein and O'Donoghue (2004) model behavior as the result of the
interaction between an affective system that is, by default, in control and a deliberative
system that can dominate the affective system and reverse a decision through “willpower”,
i.e., costly cognitive effort. In a similar way, Brocas and Carrillo (2008) assume that
emotional processes are constrained by controlled processes which may be asymmetrically
informed. This is also similar to the model of Benhabib and Bisin (2005) wherein
controlled processes monitor the decisions made by automatic processes and intervene to
constrain these processes if their decisions deviate sufficiently from optimality.
According to the model by Bernheim and Rangel (2004), the brain operates in a “cold” or a
“hot” mode. The activation of one or the other operational mode stochastically depends on
situational variables which are partly a function of previous behavior.
Taking a dual-system perspective of decision making can help to explain decision
behavior that is inconsistent with the standard assumptions of rationality in economic
paradigms, such as hyperbolic-like discounting behavior in intertemporal choice tasks
(Ainslie & Haslam, 1992), risk-aversion for gains and risk-seeking for losses in gambling
tasks (Tversky & Kahneman, 1981), or behavior that does not maximize payoff in social
bargaining tasks (see Camerer, 2003; Loewenstein et al., 2008; McClure, Botvinick,
Yeung, Greene, & Cohen, 2007).
1.2 Rationality and Deviations
1.2.1 Bayesian Updating and Utility Maximization
Probability updating can be defined as “the process of incorporating information for
the determination (estimation, calibration) of probabilities or probability distributions”
(Ouwersloot et al., 1998, p. 537). People have so-called priors concerning the likelihood of
uncertain events. These priors are assumptions they hold about probabilities because of
previous knowledge. When additional information is acquired, this further information
should be taken into account to determine a new probability of the event. People should
update their priors to so-called posteriors. Statistically, this process of weighting the base
belief with new evidence is described by Bayes' theorem (Bayes & Price, 1763). By
means of this rule, posterior probabilities can be mathematically derived from prior
probabilities and additional information:

P(A|B) = P(B|A) · P(A) / P(B), where P(B) = P(B|A) · P(A) + P(B|AC) · P(AC)

P(A) and P(B) are the prior probabilities of event A and event B, respectively. AC is the
complementary event of A (often called "not A"). P(A|B) is the posterior probability of event A
given event B.
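In code form, the updating rule can be sketched as follows (a minimal illustration; the function name and the example numbers are invented, not taken from the text):

```python
def bayes_posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Posterior P(A|B) from the prior P(A) and the likelihoods
    of the evidence B under A and under the complement of A."""
    p_not_a = 1.0 - p_a                          # P(AC)
    numerator = p_b_given_a * p_a                # P(B|A) * P(A)
    p_b = numerator + p_b_given_not_a * p_not_a  # total probability P(B)
    return numerator / p_b

# A prior of 0.5 combined with evidence that is twice as likely
# under A as under its complement yields a posterior of 2/3:
print(bayes_posterior(0.5, 0.8, 0.4))
```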
Such an updating process with the aim of calculating posterior probabilities by optimal
utilization of available information is especially important for a rational decision maker,
that is, an individual who has a clear sense of his/her preferences, makes consistent choices
over time, and intends to maximize his/her personal utility (Huang, 2005). The
maximization of subjective expected utility is the fundamental concept in economic
theories of rational behavior (see Kahneman, 2003; Sanfey, 2007). It can be described by
the following equation (see Sanfey, 2007, p. 151):
Subjective expected utility = Σi p(xi) · u(xi)

p(xi) represents the likelihood of a particular alternative xi. u(xi) represents the subjective value
of that alternative.
It is assumed that people assign a utility for each option and then choose the option with
the highest subjective expected utility, which is the sum of the utilities of all outcomes
multiplied by the probability that these outcomes occur (Bernoulli, 1738; Savage, 1954;
Schunk & Betsch, 2006; Trepel, Fox, & Poldrack, 2005; von Neumann & Morgenstern,
1944). The decision maker is treated as if he/she is equipped with unlimited knowledge,
time, and information-processing capacity.
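The maximization principle can be sketched in a few lines of code (the two options and their utility numbers below are invented for illustration only):

```python
def subjective_expected_utility(option):
    """SEU = sum of p(x_i) * u(x_i) over an option's outcomes,
    given as (probability, utility) pairs."""
    return sum(p * u for p, u in option)

# Two hypothetical options: a gamble and a sure outcome
gamble = [(0.3, 100.0), (0.7, 10.0)]   # SEU ~ 30 + 7 = 37
sure   = [(1.0, 35.0)]                 # SEU = 35
# The rational decision maker chooses the option with the highest SEU
best = max([gamble, sure], key=subjective_expected_utility)
# best is the gamble, since 37 > 35
```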
Since numerous studies have shown that people's choices cannot be predicted by
this model (e.g., Kahneman et al., 1982; for a review, see Starmer, 2000), it is now
apparent that the expected utility account can only be regarded as an approximation to
actual human decision making, instead of a satisfying description thereof (Sanfey, 2007).
Moreover, a number of well-documented phenomena lead to systematic violations of the
Bayes' rule, e.g., conservatism (Edwards, 1968; Slovic & Lichtenstein, 1971), the
representativeness heuristic (Camerer, 1987; Grether, 1980; Kahneman & Tversky, 1972;
Tversky & Kahneman, 1971), the conjunction fallacy (Zizzo et al., 2000), positive
confirmation bias (Jones & Sugden, 2001), or risk-seeking (Tversky & Kahneman, 1981).
Results from various studies on a number of almost classical problems (cab problem: Bar-Hillel, 1980; Tversky & Kahneman, 1982; light-bulb problem: Lyon & Slovic, 1976;
disease problems: Casscells, Schoenberger, & Graboys, 1978; Eddy, 1982; Hammerton,
1973) have supported the assumption that humans are not Bayesian reasoners. El-Gamal
and Grether (1995) examined data from several experiments in order to clarify which rules
participants actually used when updating probabilities. They found that Bayes' theorem
was the single most used rule (54 % of participants), followed by representativeness (34 %)
and conservatism (12 %) (see also Griffiths & Tenenbaum, 2006, for a finding of close
correspondence between everyday predictions and Bayesian inference). However,
Ouwersloot et al. (1998) examined Bayesian updating in a semi-statistical context and
observed that participants did not apply Bayes' theorem correctly, but instead
systematically made errors in probability updating. Fujikawa and Oda (2005) also
identified their participants as imperfect Bayesian updaters. Charness and Levin (2009)
conclude from their own results that few people appear to be able to understand the
necessity of updating their priors about a relevant distribution. Zizzo et al. (2000) also
provide strong evidence that experimental participants violate Bayes' theorem. Their
interpretation is that participants are neither perfectly rational, nor zero rational, but rather
are boundedly rational. This view is also held by Charness and others (Charness & Levin,
2005; Charness, Karni, & Levin, 2007) who find that a substantial number of individuals
violate Bayes' theorem by choosing stochastically dominated alternatives, but that error
rates are significantly reduced when choice is no longer accompanied by affect. This would
be consistent with the assumption that Bayesian updating does play an important role for
human decision making, but that this basic rational process is often perturbed by
systematic biases which lead to the use of other rules, such as heuristics. Overall, the
evidence implies that researchers should not ask whether individuals are Bayesians or not,
but rather under what circumstances they will decide according to Bayesian principles or
will use other strategies (see also Budescu & Yu, 2006).
1.2.2 Heuristic or Intuitive Decision Making
As previously stated, automatic processes have consistently been shown to be
responsible for many behavioral heuristics. Heuristics are simple, experience-based
strategies which have been proposed to explain how people make decisions and solve
problems, especially when facing complex problems or incomplete information (e.g.,
Gigerenzer & Goldstein, 1996; Gigerenzer, Todd, and the ABC Research Group, 1999;
Payne, Bettman, Coupey, & Johnson, 1992; Payne, Bettman, & Johnson, 1993). They are
sensitive to some features of the information environment, but insensitive to others (e.g.,
Kahneman & Tversky, 1973; Tversky & Kahneman, 1974), thereby producing systematic
patterns of calibration and miscalibration across different judgment environments (Griffin
& Tversky, 1992; Massey & Wu, 2005). Considerable empirical evidence has accumulated
to suggest that models of heuristics are able to predict cognitive processes and choices of
decision makers (e.g., Bröder, 2000, 2003; Bröder & Schiffer, 2003; Dhami, 2003; Payne
et al., 1993; Rieskamp & Hoffrage, 1999). Heuristics lead to good results under certain
circumstances (e.g., Gigerenzer & Goldstein, 1996), but are not as rational as the
controlled processes of System II and therefore lead to systematic errors or biases in
certain cases (e.g., Kahneman et al., 1982; Tversky & Kahneman, 1974). Kahneman and
colleagues showed that decision makers often base their judgments on feelings of
representativeness or availability which happens without conscious awareness (Kahneman
& Frederick, 2002), but rather by means of automatic, intuitive processes.
Different models have been proposed to account for “intuition”, each with a
particular focus. In addition to affect-based models (e.g., Damasio, 1994; Finucane,
Alhakami, Slovic, & Johnson, 2000), there are approaches based mainly on cognitive
evidence accumulation (e.g., Busemeyer & Townsend, 1993) and on sampling (e.g.,
Dougherty et al., 1999; Fiedler & Kareev, 2008), as well as network models (e.g.,
Busemeyer & Johnson, 2004; Glöckner & Betsch, 2008b; Holyoak & Simon, 1999).
Therefore, not surprisingly, there are a lot of different definitions of intuitive decision
making as opposed to rational or deliberate decision making2. While some authors propose
that intuition allows for fast processing of a huge amount of information (e.g., Glöckner,
2008; Glöckner & Betsch, 2008b, 2008c; Hammond et al., 1987), others (e.g., Gigerenzer,
2007) argue, in the tradition of the bounded rationality approach (Simon, 1955), that intuition
only extracts very few cues from the environment and ignores a good portion of available
information. Glöckner and Witteman (2010a, pp. 5-6) sum up the key points of different
definitions of intuition:
“Intuition is based on automatic processes that rely on knowledge structures that are acquired by
(different kinds of) learning. They operate at least partially without people's awareness and result in
feelings, signals, or interpretations.”
Though different definitions assume different foundations of intuition, there seems to be
some consensus that intuition is based on implicit knowledge (Lieberman, 2000; Hogarth,
2001; Plessner, 2006; Reber, 1989), that is, the subjective experience that one option is
better than another without being able to explicitly specify the reasons for this experience.
Glöckner and Witteman (2010a) propose a categorization of different kinds of intuition
according to the format in which information is assumed to be stored in memory (e.g.,
reinforcement learning vs. acquisition of schemas) and according to the retrieval or
information integration processes (e.g., affective arousal vs. comparison with prototypes).
Following this approach, intuition comprises cognitive as well as affective processes. This
is in line with Camerer et al. (2005) who suggest that automatic processes – like controlled
processes – can relate to cognitive and to affective content. One reason why these heuristic
processes often lead to violations of the axioms of utility theory is that they might consider
all salient, available information (encountered in the environment or activated from
memory) regardless of the objective relevance of the information (Glöckner & Betsch,
2008b). On the other hand, they might also omit information which is logically relevant
(Evans, 2006).

2 Glöckner and Witteman (2010b, pp. 3-4) hold that “intuition is used as a common label for a set of
phenomena that are likely to be based on completely different cognitive mechanisms.”
Summing up, intuitive processing may be considered an opposite of controlled
analytic processing in the context of dual-process theories, or it may be considered an
analog of bounded rationality in decision making research (see Fiedler & Kareev, 2008).
Until today, little is known about the underlying cognitive and affective processes leading
to intuitive judgments, so that intuition still represents the “black box of modern
psychology” (Catty & Halberstadt, 2008, p. 295).
In the present dissertation, the focus is on three intuitive strategies which are
opposed to Bayesian updating: the conservatism heuristic (Edwards, 1968) wherein people
give too little weight to recent experience compared to prior beliefs, the representativeness
heuristic (Kahneman & Tversky, 1972) wherein people judge the probability of a
hypothesis by considering how much the hypothesis resembles available data, and the
reinforcement heuristic (Charness & Levin, 2005) wherein positive feedback increases the
probability of repeating a certain action and negative feedback diminishes this probability.
1.2.2.1 Conservatism Heuristic
Conservatism (Edwards, 1968) is a decision bias according to which people give
too much weight to prior probability estimates (base-rates) when confronted with new
information, or even base their decisions on base-rates only3. New information is
underweighted in the updating process relative to the amount prescribed by normative rules
such as Bayes' rule, or even completely ignored. The intuition that underlies this
heuristic is to stick with one's prior beliefs and disregard new evidence. From a
psychological perspective, this is related to an attachment to past practices, beliefs (see
Klayman, 1995), anchors (see Mussweiler & Strack, 1999), or commitments even when
they start to prove erroneous or detrimental. This underestimation of the impact of
evidence has been observed repeatedly (e.g., Dave & Wolfe, 2003; El-Gamal & Grether,
1995; Grether, 1992; Kahneman & Tversky, 1972; Phillips & Edwards, 1966; Sanders,
1968). In typical studies (e.g., Edwards, 1968; Slovic & Lichtenstein, 1971) participants
were presented with samples of data drawn from one of several possible sources with fixed
prior probabilities (base-rates). The participants' task was to estimate the (posterior)
probability that the sample had been generated by a particular source. It was found that
their estimates stayed too close to the prior probabilities and failed to adjust adequately to
the new information. But conservatism does not seem to be a stable phenomenon. Results
of many studies suggest that the existence and amount of conservatism are highly dependent
on characteristics of the situation (Winkler & Murphy, 1973).

3 Therefore, this strategy has also been labeled base-rate only (Gigerenzer & Hoffrage, 1995).
Many explanations have been posited to account for this non-Bayesian behavior.
They include misperception of the diagnostic impact of the evidence (Peterson & Beach,
1967), misaggregation of the components (Edwards, 1968), avoidance of extreme
responses (DuCharme, 1970), implicit rejection of the model's assumption of conditional
independence (Navon, 1978; Winkler & Murphy, 1973), effects of random noise in the
response process (Erev, Wallsten, & Budescu, 1994) and fallible retrieval processes
(Dougherty et al., 1999). According to Griffin and Tversky (1992), conservatism arises
when people rely too little on high-weight evidence such as a large sample and instead
focus too much on the low strength of the evidence (salience or extremity, i.e., how well
that evidence matches the hypothesis in question). A very similar model was put forward
by Massey and Wu (2005). Hirshleifer (2001) argued that processing new information and
belief updating is cognitively costly, and that information which is presented in a
cognitively costly form might be undervalued. Related to this, Erev, Shimonowitch,
Schurr, and Hertwig (2008) pointed out that overweighting base-rates might reflect the
decision to predominantly bet on those response categories with high prior probabilities
due to limited resources. As early as 1972, Wallsten made a convincing argument that it
may be impossible to conclusively identify one single explanation for conservatism4.
Other studies of probabilistic judgment in the 1970s and 1980s found the opposite
result, namely that people gave too little weight to base-rates, which was labeled base-rate
neglect. This bias was attributed to the confusion of probability with similarity (Tversky &
Kahneman, 1982).
4 In 1983, Fischhoff and Beyth-Marom noted that research on conservatism “was quietly abandoned” (p.
248), partly because of the failure to conclusively identify psychological mechanisms. A careful search in the
literature suggested that despite some more recent attempts to reconcile conservatism with base-rate neglect
(e.g., Erev et al., 2008; Erev et al., 1994; Griffin & Tversky, 1992; Massey & Wu, 2005), no dominant
explanation for conservatism in decision making has been brought forward so far.
1.2.2.2 Representativeness Heuristic
Kahneman and Tversky (1972; see also Tversky & Kahneman, 1971) define the
representativeness heuristic as follows:
“A person who follows this heuristic evaluates the probability of an uncertain event, or a sample, by
the degree to which it is: (i) similar in essential properties to its parent population; and (ii) reflects
the salient features of the process by which it is generated.” (1972, p. 431)
This means that the likelihood of an instance in reference to a class is estimated by judging
the similarity of the instance to the class. Decision makers following the representativeness
heuristic fall for an illusion of validity caused by a good fit (in terms of representativeness)
between the predicted outcome and the input information. They fail to realize the
implications of sample size and draw the same inferences from small and large samples
(Tversky & Kahneman, 1971).
In Glöckner and Witteman's (2010a) intuition model, the representativeness
heuristic corresponds to the category “matching intuition”: Objects or situations are
automatically compared with exemplars, prototypes or schemes that are stored in memory
(e.g., Dougherty et al., 1999; Juslin & Persson, 2002; Kahneman & Frederick, 2002). For
example, according to the MINERVA-DM model (Dougherty et al., 1999; cf. Hintzman,
1988), this comparison of the current target object or situation to all similar examples
results in an “echo” which corresponds to a feeling towards a behavioral option. The
intensity of this “echo” is dependent on the number of traces stored in memory that are
highly similar to the target. In his recognition-primed decision model, Klein (1993) speaks
of similar pattern recognition processes that underlie intuition and that activate action
scripts without awareness.
In support for the representativeness heuristic, Kahneman and Tversky (1973;
Tversky & Kahneman, 1974) demonstrated that likelihood judgments were related to
estimated similarity between target event and prototype instead of Bayesian probabilities;
participants largely ignored prior probabilities or base-rates and committed a conjunction
error when the conjunction of two events was more similar to a stored prototype than one
of its constituents.
Grether (1980) criticized the work of Kahneman and Tversky because he doubted
that their findings could be applied to economic decisions. In order to test if the heuristic
could represent a relevant factor in an economic setting, Grether developed an
experimental design which was reduced to statistical aspects and therefore allowed for the
context-independent investigation of decision making. In this experiment, participants
were presented with three bingo cages that were labeled Cage X, A, and B, respectively;
each contained six balls. Cage X contained balls that were numbered one through six. Cage
A contained four balls marked with an “N” and two balls marked with a “G”, while Cage B
contained three balls marked with an “N” and three balls marked with a “G”. Participants
observed how six balls were drawn with replacement out of Cage A or Cage B. The cages
were concealed so that participants could not see from which cage the balls were drawn.
However, they knew the process for determining from which cage the balls were to be
drawn: First, a ball was drawn out of Cage X, also concealed to the participants. If one of
the numbers one through k was drawn from Cage X, the following six draws would be
from Cage A, otherwise from Cage B. The value of k was varied between 2 and 4, so that
participants knew that Cage A would be chosen with a prior probability of 1/3, 1/2 or 2/3.
The participants' task consisted of indicating which cage they thought the balls were drawn
from. There are several situations in which the drawn sample looks like one of the cages
although there is a higher probability that the balls were drawn from the other cage. An
example of such a situation is a prior probability of 1/3 for Cage A with four "N" balls
and two “G” balls drawn. The sample resembles Cage A so that a participant who follows
the representativeness heuristic would think the balls were drawn from this cage. However,
when calculating the posterior probability for Cage A using Bayes' theorem, the
obtained value is only 0.4125. This means it is more probable that the balls were drawn
from Cage B. Grether's results showed that overall, participants tended to behave
according to the heuristic and gave too little weight to prior probabilities, although priors
were not completely ignored. This was especially the case for inexperienced and for
financially unmotivated participants. However, it could not be shown, either, that
performance-related payment led to more frequent use of Bayes' theorem.
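The posterior of roughly 0.41 mentioned above can be verified directly with Bayes' theorem (a small sketch; the function name is mine, and the binomial coefficient is omitted because it cancels in the ratio):

```python
def posterior_cage_a(prior_a, draws_n, draws_g):
    """Posterior probability that a sample was drawn from Cage A
    (four 'N' balls, two 'G' balls) rather than Cage B (three of
    each), given the observed numbers of 'N' and 'G' draws."""
    lik_a = (4/6) ** draws_n * (2/6) ** draws_g   # likelihood under Cage A
    lik_b = (3/6) ** (draws_n + draws_g)          # likelihood under Cage B
    joint_a = prior_a * lik_a
    joint_b = (1 - prior_a) * lik_b
    return joint_a / (joint_a + joint_b)

# Prior of 1/3 for Cage A; sample of four "N" and two "G" balls:
print(posterior_cage_a(1/3, 4, 2))  # ~0.41, so Cage B is more likely
```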
In further variations of this experiment (Grether, 1992) it was again found that
financial incentives had no significant influence on the use of Bayes' theorem, but only
reduced the proportion of incoherent and nonsense responses. A reduction of the use of the
representativeness heuristic was achieved by a certain alteration: When sample size and
population proportions were chosen in such a way that representative samples could not
arise, participants even behaved in a conservative manner and placed too much weight on
prior probabilities. Grether concluded that the use of the heuristic or decision rules in
general seems to depend upon various details of the decision problem and environment. A
further study based on the experimental design of Grether (1980, 1992) was conducted by
Harrison (1994). In contrast to Grether's findings, the results of Harrison imply that
experience with the task as well as financial incentives do matter for the use of Bayes'
theorem versus the representativeness heuristic.
More recent research in economic decision making has supported the classical
results by Kahneman and Tversky and Grether (e.g., El-Gamal & Grether, 1995; Ganguly,
Kagel, & Moser, 2000), and it has been pointed out that decision making often represents a
process of pattern matching rather than a process of explicitly weighing costs and benefits
(e.g., Leboeuf, 2002; Medin & Bazerman, 1999). Of course it is not wrong to consider
representativeness, but it is a sub-optimal strategy to use it as the only decision criterion
and not to take account of prior probabilities (Grether, 1992).
1.2.2.3 Reinforcement Heuristic
The reinforcement heuristic as a decision rule opposed to Bayesian updating was
first introduced by Charness and Levin (2005). They define reinforcement as the idea that
“one is more likely to pick choices (actions) associated with successful past outcomes than
choices associated with less successful outcomes” (2005, p. 1300). They argue that
winning creates a tendency to choose the same decision again, whereas losing analogously
creates a tendency to switch actions, which results in a simple “win-stay, lose-shift”
principle.
There are several models of reinforcement learning in both economics and
psychology. From a psychological perspective, the reinforcement heuristic roughly
corresponds to Thorndike's (1911) Law of Effect. The application of reward/punishment
after a decision behavior increases/reduces the prevalence of the respective behavior, of
which people are not necessarily aware (Skinner, 1938). Research in neuroscience has
identified the mesencephalic dopamine system as the neural substrate for reinforcement
learning (e.g., Schultz, 1998). On the basis of psychophysiological evidence, Holroyd and
Coles (2002) concluded that some aspects of reinforcement learning are associated with
extremely fast and unconscious brain responses. This implies that reinforcement learning
has characteristics of an automatic process. In the model of Glöckner and Witteman
(2010a), the reinforcement heuristic would fall in the category “associative intuition”,
which describes implicitly learned, unconscious, affective reactions towards decision
options that might lead to the activation of the previously successful behavioral option.
This view is also similar to the somatic marker hypothesis (Damasio, 1994) and the affect
heuristic (Slovic et al., 2002). It is further consistent with research in social psychology
showing that the human brain affectively tags almost all objects and concepts, and that
these affective tags come to mind automatically when those objects and concepts are
encountered (e.g., Bargh, Chaiken, Raymond, & Hymes, 1996; Fazio, Sanbonmatsu,
Powell, & Kardes, 1986; De Houwer, Hermans, & Eelen, 1998). For instance, Bechara et
al. (1997) could show that people reacted affectively towards decision options before they
consciously knew that it would be advantageous not to choose these options.
Basic economic reinforcement models (e.g., Erev & Roth, 1998; Roth & Erev,
1995) assume an initial propensity for a certain choice and utilize a payoff sensitivity
parameter. Camerer and Ho (1999a, 1999b) combine reinforcement and belief learning by
assuming two processes – a rapid emotional process in which a chosen strategy is quickly
reinforced by the resulting gain or loss, and a slower deliberative process in which
counterfactuals are created about the outcomes of other strategies that were not chosen. In
Game Theory, several behavioral rules based on the principle of reinforcement learning
have been studied (see Fudenberg & Levine, 1998 for an overview). Mookherjee and
Sopher (1994), Roth and Erev (1995), Erev and Roth (1998), and Camerer (2003) used
reinforcement learning in order to better explain behavioral data from laboratory
experiments in game-theoretic settings.
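A minimal sketch of such a propensity-based reinforcement model (the initial propensities and payoff below are illustrative values, not parameters from the cited papers):

```python
import random

def choose(propensities):
    """Pick an action with probability proportional to its propensity,
    as in basic propensity-based reinforcement learning."""
    actions = list(propensities)
    weights = [propensities[a] for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

def reinforce(propensities, action, payoff):
    """A successful outcome adds its payoff to the chosen action's
    propensity, making that action more likely in the future."""
    propensities[action] += payoff

# Illustrative use: two actions with equal initial propensities
props = {"left": 1.0, "right": 1.0}
action = choose(props)
reinforce(props, action, payoff=2.0)
# After one reinforced win, the chosen action has propensity 3.0
# vs. 1.0 and is therefore three times as likely to be repeated.
```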
Reinforcement learning, the association of actions with consequences and the
flexible modification of acquired reinforcement values, can be regarded as a crucial
process for adaptive decision making (Frank & Claus, 2006). This mechanism is thought to
improve decisions over time, by continually updating reward expectations according to the
outcomes of choices encountered in the environment (Barto & Sutton, 1997). Human
children in particular learn through imitation and reinforcement (Bandura, 1977; Bandura,
Ross, & Ross, 1963; Meltzoff, 2005, 2007; Meltzoff & Moore, 1997), therefore it is not
surprising that these two behavioral rules also frequently influence adult behavior even
when more “rational” mechanisms are available (Sutton & Barto, 1998). However, such
simple rules of thumb often produce suboptimal results. For instance, Betsch, Haberstroh,
Glöckner, Haar, and Fiedler (2001) showed that repeatedly reinforced behavioral options
were maintained even though it was obvious that a different behavior was more
advantageous.
Charness and Levin (2005) examined Bayesian updating in contrast to
reinforcement learning by means of a binary-choice experiment which was a variant of
Grether‟s (1980, 1992) posterior probability task. They created a decision situation where
rational decisions (based on Bayes' theorem) and decisions based on the reinforcement
heuristic were directly opposed: Participants were presented with two urns, each filled with
six balls in some combination of black and white balls. In each of 60 (condition III: 80)
trials, participants had to make two draws with replacement. For every black ball (condition
III: every ball of the respective winning color) they drew, participants received a certain
amount of money. In detail, participants chose an urn from which one ball was randomly
extracted and were paid if it was black. Then this ball was replaced into its respective urn and
the process was repeated. The exact distributions of black and white balls in the two urns
were dependent upon the state of the world, which was determined (independently in each
trial) with a known probability of 50 % per possible state and which remained constant
across the two draws of one trial.
Figure 1 gives an overview of the treatment conditions in the experiment of
Charness and Levin (2005). In each condition, the state of the world was either "Up" or
"Down", each with a probability of 50 %. Conditions I and II comprised 60 trials; during
the first 20 trials, the first draw was forced alternatingly from the left and the right urn, and
both draws were paid. Condition III comprised 80 trials; the first draw was always forced
from the left urn (randomly for a black or for a white ball), and only the second draw was
paid.

Figure 1. Distribution of balls in the urns, by treatment condition and state, in the study of
Charness and Levin (2005). [The graphical depiction of the urn contents is not reproducible here.]
A person who decides rationally in a Bayesian sense will update the prior of 50 % after
observing the color of the first ball, and will base the second draw on the derived posterior
probability in order to maximize expected payoff. A person who uses the reinforcement
heuristic will follow the principle “win-stay, lose-shift”: He/she will stay with the same urn
if it has yielded a black ball (condition III: a ball of the respective winning color) in the
first draw; otherwise he/she will choose the other urn for the second draw. This is because
drawing a black ball means success and is likely to reinforce the action (drawing from the
respective urn), while drawing a white ball signals failure and is likely to reinforce
switching to the other urn.
According to the distribution of balls in the urns, after a first draw from the right
urn the two strategies are aligned, namely drawing from the same urn in the event of
success and switching to the left urn in the event of failure. But when the first draw is
made from the left urn, the strategies are directly opposed: The expected payoff is higher
when one switches to the right urn after a successful draw, and stays with the left urn after
an unsuccessful draw. Therefore, a Bayesian will behave differently from a person who
follows the reinforcement heuristic. Moreover, a Bayesian will always start with the right
urn if given the chance to decide because this behavior produces the highest expected
payoff for the two draws5.
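Since the exact urn compositions are given only graphically in Figure 1, the following sketch uses illustrative compositions with the properties described above (a fully informative right urn and a mixed left urn); the numbers are assumptions for illustration, not necessarily those of the original study:

```python
def posterior_up(urn, prior_up, drew_black):
    """Posterior of the 'Up' state after one draw from `urn`,
    where urn = {'up': (black, white), 'down': (black, white)}."""
    p_black_up = urn["up"][0] / sum(urn["up"])
    p_black_down = urn["down"][0] / sum(urn["down"])
    lik_up = p_black_up if drew_black else 1 - p_black_up
    lik_down = p_black_down if drew_black else 1 - p_black_down
    joint_up = prior_up * lik_up
    return joint_up / (joint_up + (1 - prior_up) * lik_down)

# Illustrative compositions (black, white) per state: the right urn
# reveals the state perfectly, the left urn is only mildly informative.
LEFT = {"up": (4, 2), "down": (2, 4)}
RIGHT = {"up": (6, 0), "down": (0, 6)}

# First draw from the LEFT urn yields a black ball (a "success"):
post = posterior_up(LEFT, 0.5, drew_black=True)   # posterior of Up = 2/3
stay = post * 4/6 + (1 - post) * 2/6              # P(black) if staying left
switch = post * 1.0 + (1 - post) * 0.0            # P(black) if switching right
# switch > stay: the Bayesian switches after the win, while the
# "win-stay, lose-shift" heuristic stays -- the two strategies conflict.
```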
The three different conditions were designed to examine the factors which might
influence the participants' decisions. Concerning condition II, it was assumed that this
configuration reduced the complexity of the task and made it simpler to draw conclusions
about the present state of the world. Moreover, Charness and Levin hypothesized that the
rational Bayesian process would be perturbed by reinforcement effects. Therefore,
affective as well as monetary reinforcement was removed from the first draw in condition
III, in which only winning balls of the second draw were paid, and the winning color of each trial was announced only after the first draw, so that the first draw had only informative value and was not associated with success or failure.
Charness and Levin found that participants made very few mistakes when Bayesian
updating and the reinforcement heuristic were aligned (after first draws from the right urn),
and many more mistakes when the two strategies were opposed (after first draws from the
left urn). While the alterations in condition II had no influence on error rates, those in
condition III significantly reduced the error rate, especially after drawing a successful ball.
Altogether, their results indicate that mistakes were indeed explained by the competition
between the rational strategy and reinforcement learning. Additionally, Charness and Levin
found a negative relationship between the cost of an error6 and its frequency, which
suggests that the importance of the reinforcement heuristic diminishes as incentives
increase. Further, female participants committed a higher number of decision errors
compared to male participants. All in all, there were huge interindividual differences in
error rates.
5 This is the reason why the first draw was forced, alternating between the left and right urn, during the first 20 trials. In this way it was ensured that all participants faced situations where Bayesian updating and reinforcement learning were opposed, which were the situations of special interest.
6 This cost corresponds to the difference in expected payoffs between the correct decision and the erroneous one.
Extending the method used by Charness and Levin (2005), Achtziger and Alós-Ferrer (2010) attempted to investigate in more detail the processes underlying participants'
decision behavior. The analysis of response times showed that when Bayesian updating
and reinforcement learning were opposed, the frequency of erroneous quick decisions was
significantly higher than the frequency of erroneous slow decisions. When both processes
were aligned, the opposite was true. These findings are in line with the assumption that
Bayesian updating constitutes a controlled process, while the deviation from this process in
form of the reinforcement heuristic corresponds to a more automatic (and therefore also
quicker) process. Achtziger and Alós-Ferrer (2010) also tested for the effect of different
monetary incentives and found that higher incentives did not increase effort or
performance.
1.3 Inter- and Intraindividual Heterogeneity regarding Controlled versus Automatic, or Rational versus Intuitive Decision Making
As described above, dual-process theories localize intuition and deliberation as
referring to the class of automatic processes or System I, and to the class of controlled
processes or System II, respectively (Kahneman, 2003; Sloman, 1996, 2002; Stanovich &
West, 2000). Whether a particular decision problem implies an automatic or a controlled
process, or whether a decision is made in an intuitive or deliberate (“rational”) manner,
does not only depend on the problem itself (e.g., see Hammond et al., 1987), but also on
situational factors and person factors. This results in a large heterogeneity within and
among individuals.
Heterogeneity within people results from the assumption that both strategies are
available to some extent in most humans, so that the actual use of either strategy depends
on situational factors. The same decision maker may, for example, work on the problem in
a controlled manner when given unlimited time, and rely on an automatic process if under
a time constraint (e.g., Ariely & Zakay, 2001; Friese, Wänke, & Plessner, 2006; Goodie &
Crooks, 2004; Rieskamp & Hoffrage, 1999; Wilson, Lindsey, & Schooler, 2000). In a
similar way, heuristic or intuitive reasoning will be more probable when people are
depleted of executive resources (e.g., Hofmann, Rauch, & Gawronski, 2007; Masicampo &
Baumeister, 2008; Pocheptsova, Amir, Dhar, & Baumeister, 2009) or under cognitive load
(e.g., Friese, Hofmann, & Wänke, 2008; Gibson, 2008; Gilbert, 2002). Being in a good
mood (e.g., de Vries, Holland, & Witteman, 2008; Ruder & Bless, 2003; see Schwarz,
2000) also makes people more susceptible to heuristic reasoning. Physiological needs are
assumed to trigger affect, whereas the presence of probabilities triggers the use of
deliberate strategies (Epstein, 2007; Hammond et al., 1987; Weber & Lindemann, 2008).
Furthermore, according to the idea of an effort-accuracy trade-off (e.g., Beach & Mitchell,
1978; Payne et al., 1992; Payne, Bettman, & Johnson, 1988), decisions of high importance
also make the selection of deliberate, analytical decision strategies more probable.
According to such cost-benefit theories of decision strategy choice, the decision maker
balances a strategy's accuracy (benefits) against its demand for mental effort (cost of
execution) and selects the strategy that maximizes the net gain of benefits over costs.
Therefore, the more controlled process might dominate the automatic one in certain
individuals if performance-related monetary incentives or social pressure are introduced.
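This cost-benefit logic can be sketched as a toy selection rule; all accuracies, effort costs, and stakes below are hypothetical numbers for illustration, not estimates from the literature:

```python
# Toy effort-accuracy trade-off in the spirit of cost-benefit theories of
# strategy selection (e.g., Beach & Mitchell, 1978; Payne et al., 1988).
# (accuracy, effort cost) pairs are made-up values.
STRATEGIES = {
    "automatic/heuristic":   (0.70, 1.0),  # fast and cheap, less accurate
    "controlled/deliberate": (0.95, 6.0),  # accurate, but mentally costly
}

def select_strategy(strategies, stakes):
    """Pick the strategy maximizing benefit (accuracy * stakes) minus effort."""
    def net_gain(name):
        accuracy, effort = strategies[name]
        return accuracy * stakes - effort
    return max(strategies, key=net_gain)
```

For low stakes the cheap heuristic wins (0.70 * 10 - 1 = 6 versus 0.95 * 10 - 6 = 3.5); once the stakes are high enough, the deliberate strategy dominates (0.95 * 40 - 6 = 32 versus 27), mirroring the claim that important decisions favor analytical strategies.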
Whether there are interindividual differences in the use of decision making
strategies has been discussed repeatedly (e.g., Langan-Fox & Shirley, 2003; Myers &
McCaulley, 1986; Newell, Weston, & Shanks, 2003). Individual differences have been
found in the tendencies to avoid framing errors (Levin, Gaeth, Schreiber, & Lauriola,
2002; Shiloh, Salton, & Sharabi, 2002), abandon sunk costs (Stanovich, 1999), and apply
decision rules (Bröder, 2000). Dependent upon personal experience and skills, the same
decision process might be more automatic for one decision maker than for another. For
instance, strategies that were reinforced in the past are likely to be selected when the same
or a similar decision problem occurs (Rieskamp & Otto, 2006). Apart from that,
individuals can also develop quite stable preferences for a certain decision strategy. Based
on Epstein's (1994; see also Epstein & Pacini, 1999) Cognitive-Experiential Self Theory
(CEST), Epstein and colleagues (Epstein et al., 1996; see also Pacini & Epstein, 1999)
found interindividual differences regarding the engagement and trust in one's own intuition
and in the experiential system (faith in intuition) and regarding the engagement in and
enjoyment of cognitive activity (need for cognition, see also Cacioppo & Petty, 1982).
Faith in intuition is characterized by heuristic processing as described by Kahneman et al.
(1982). In a similar way, Betsch (2004, 2007; Betsch & Kunz, 2008) demonstrated that
people differ in their motivation to make decisions in an automatic way, based on implicit
knowledge, feelings, and automatic affect (preference for intuition)7, and their motivation
to make decisions in a controlled, analytic mode, based on explicit cognitions (preference
7 In contrast to Epstein and colleagues (Epstein et al., 1996; Pacini & Epstein, 1999), Betsch (2004) does not equate intuition with heuristic processing in the sense of using cognitive short-cuts (see also Keller, Bohner, & Erb, 2000), but understands intuition as a purely affective mode. While faith in intuition and need for cognition correlate with measures of ability, preference for intuition and preference for deliberation do not show such relationships, but only correlate with motivational variables.
for deliberation). Apart from Epstein and colleagues and Betsch and colleagues, several
other researchers have developed inventories with the aim of measuring individual
differences in intuition and/or deliberation, which all capture different aspects of the
concepts (see Betsch & Iannello, 2010).
Several studies have found empirical evidence for interindividual differences in
decision styles which correlate with other dependent variables. For instance, people with a
stable preference for intuition are faster in decision making (Betsch, 2004; Schunk &
Betsch, 2006) and rely more strongly on implicit knowledge (Betsch, 2007; Richetin, Perugini,
Adjali, & Hurling, 2007; Woolhouse & Bayne, 2000) and on feelings (Schunk & Betsch,
2006) compared to people who prefer a deliberate mode. Intuitive decision makers are also
more prone to regard their heuristic responses as logical (Epstein et al., 1996). Highly
rational individuals give more normatively correct answers than relatively less rational
individuals (Pacini & Epstein, 1999; Shiloh et al., 2002) and have an increased ability to
adequately handle statistical information (Berger, 2007).
It is important to note that people are basically able to use both intuitive as well as
deliberate decision strategies (see, e.g., Betsch, 2007). Still, most people seem to have a
clear preference for one strategy (Betsch, 2004; Betsch & Kunz, 2008; Epstein et al., 1996;
Pacini & Epstein, 1999) which plays a role when their decision mode is not constrained by
the situation. Dual-process models which assume a parallel-competitive structure (e.g.,
Sloman, 1996, 2002) might explain such interindividual differences by postulating that
more resources are allocated to one of the two processing systems compared to the other
one. Models assuming a default-interventionist structure (e.g., Kahneman & Frederick,
2002, 2005), on the other hand, might explain individual differences by suggesting
different thresholds for the activation of deliberation in addition to intuition.
Interindividual differences in preferences for intuitive or deliberate decision making
could at least partly explain the huge heterogeneity in decision behavior observed in the
studies of Bayesian updating and heuristics (e.g., Charness & Levin, 2005; Grether, 1980,
1992; Kahneman et al., 1982).
1.4 Neuronal Correlates of Decision Making
1.4.1 Recent Findings from Neuroeconomics
During the last few years, the relatively new field of neuroeconomics (see, e.g.,
Camerer, 2007; Camerer et al., 2005; Loewenstein et al., 2008; Sanfey, 2007; Sanfey,
Loewenstein, McClure, & Cohen, 2006) has begun to investigate brain processes of human
economic decision making by means of neuroscientific methods, predominantly by brain
imaging techniques such as functional magnetic resonance imaging (fMRI). Studies using
fMRI have identified brain regions that are assumed to be involved in decision making
under risk and uncertainty, intertemporal choices, social decision making (e.g., Sanfey,
2007; Loewenstein et al., 2008), and consumer behavior (for a review, see Hubert &
Kenning, 2008). This research has, for example, discovered brain regions in which activity
scales with the magnitude of reward and punishment (e.g., Delgado, Locke, Stenger, &
Fiez, 2003; Elliott, Newman, Longe, & Deakin, 2003), areas associated with the processing
of probabilities (e.g., Berns, Capra, Chappelow, Moore, & Noussair, 2008; Knutson,
Taylor, Kaufman, Peterson, & Glover, 2005), areas in which activation correlates with the
expected value of a decision option (e.g., Breiter, Aharon, Kahneman, Dale, & Shizgal,
2001; Preuschoff, Bossaerts, & Quartz, 2006), neural signals reflecting risk (e.g., Huettel,
2006; Huettel, Song, & McCarthy, 2005) and ambiguity (Hsu, Bhatt, Adolphs, Tranel, &
Camerer, 2005), regions that are activated when human decision makers receive unfair
offers from interaction partners (Sanfey, Rilling, Aronson, Nystrom, & Cohen, 2003), and
brain regions influenced by expert advice in the context of decision making under
uncertainty (Engelmann, Capra, Noussair, & Berns, 2009).
Whereas fMRI offers a high spatial resolution, electroencephalography (EEG) is a
direct measurement of neuronal activity with very high temporal resolution. It records the
electrical activity on the scalp that is produced by excitatory and inhibitory post-synaptic
potentials in the dendritic arborizations summed across large populations of neurons in
cortical tissue. Via EEG recordings, neural processes can be measured in real-time (i.e.,
online). Since many processes of decision making are thought to be automatic (e.g.,
processes corresponding to reinforcement learning, attention allocation, conflict
monitoring, or feedback processing), and therefore extremely fast (in the range of
milliseconds), the use of EEG recordings seems well suited to explore the time course in
which relevant processes take place.
Event-related potentials (ERPs) are aspects of the brain's electric potential that are
time-locked to certain internal or external events and are considered to be manifestations of
brain activities occurring in preparation for or in response to these events (Fabiani,
Gratton, & Coles, 2000). Therefore they can be used as indicators of psychological
processes underlying thinking, feeling, and behavior (e.g., Macar & Vidal, 2004). In an
extensive number of studies, ERPs have also been used to investigate the processes of
human economic decision making, particularly the processing of monetary rewards or
incentives in simple gambling tasks (e.g., Goyer, Woldorff, & Huettel, 2008; Hajcak,
Moser, Holroyd, & Simons, 2006; Kamarajan et al., 2009; Masaki, Takeuchi, Gehring,
Takasawa, & Yamazaki, 2006; Mennes, Wouters, Van den Bergh, Lagae, & Stiers, 2008;
Nieuwenhuis, Yeung, Holroyd, Schurger, & Cohen, 2004; Yu & Zhou, 2009). In these
tasks participants are often required to take probabilities of possible outcomes into account
when trying to make their decisions in accordance with maximizing their own payoffs. The
focus of investigations typically lies on reinforcement learning and feedback processing
(e.g., Frank, Woroch, & Curran, 2005; Hajcak, Moser, Holroyd, & Simons, 2007; Holroyd
& Coles, 2002, 2008; Holroyd, Krigolson, Baker, Lee, & Gibson, 2009; Yeung & Sanfey,
2004) which are, for instance, reflected in the modulation of the feedback-related
negativity.
1.4.2 FRN as an Indicator of Reinforcement Learning
The feedback-related negativity (FRN; sometimes also termed medial frontal
negativity, MFN) is a negative event-related potential that is elicited by negative
performance feedback (Gehring & Willoughby, 2002, 2004; Hajcak, Holroyd, Moser, &
Simons, 2005; Holroyd & Coles, 2002; Miltner, Braun, & Coles, 1997). It occurs about
200-300 ms after feedback presentation; its maximum has been located to medial frontal
scalp locations. The anterior cingulate cortex (ACC) has been identified as the most likely
generator of the FRN by studies which used source-localization (e.g., Bellebaum & Daum,
2008; Gehring & Willoughby, 2002; Luu, Tucker, Derryberry, Reed, & Poulsen, 2003;
Miltner et al., 1997; Ruchsow, Grothe, Spitzer, & Kiefer, 2002), consistent with theoretical
perspectives (Paus, 2001; Bush, Luu, & Posner, 2000) and research studies (Kerns et al.,
2004; MacDonald, Cohen, Stenger, & Carter, 2000) that argue for the significance of the
medial frontal cortex in performance monitoring and regulation of cognitive control. An
FRN can be observed in situations where outcomes are partly unexpected, such as
gambling tasks that require participants to use positive and negative feedback to evaluate
their response as correct or incorrect, in contrast to situations in which there is a clear
mapping between response and outcome (Heldmann, Rüsseler, & Münte, 2008). A number
of experiments have demonstrated that the amplitude of the FRN is larger following
feedback indicating unfavourable outcomes (e.g., errors or loss of money) than following
positive feedback (e.g., Bellebaum & Daum, 2008; Hewig et al., 2007; Holroyd, Larsen, &
Cohen, 2004; Polezzi, Sartori, Rumiati, Vidotto, & Daum, 2010; Yeung & Sanfey, 2004)
and larger following unexpected losses than following expected losses (e.g., Bellebaum &
Daum, 2008; Hajcak et al., 2007; Hewig et al., 2007; Holroyd, Nieuwenhuis, Yeung, &
Cohen, 2003; Yasuda, Sato, Miyawaki, Kumano, & Kuboki, 2004; but see Cohen, Elger, &
Ranganath, 2007; Hajcak, Holroyd, et al., 2005). While it is virtually beyond controversy
that the FRN reflects reward valence, there is no clear evidence whether the FRN is also
sensitive to violations of reward magnitude expectations. It has often been found to be
insensitive to the magnitude of reward in within-subjects designs (e.g., Hajcak et al., 2006;
Holroyd, Hajcak, & Larsen, 2006; Marco-Pallarés et al., 2008; Sato et al., 2005; Yeung &
Sanfey, 2004), but more recently there have also been opposite findings (e.g., Bellebaum,
Polezzi, & Daum, 2010; Goyer et al., 2008; Marco-Pallarés et al., 2009; Onoda, Abe, &
Yamaguchi, 2010; Wu & Zhou, 2009). The amplitude of the FRN depends not so much on
the absolute or objective value of feedback, but mainly on the relative or subjective value
of feedback in a certain context (Holroyd & Coles, 2008; Holroyd et al., 2004;
Nieuwenhuis et al., 2004; Polezzi et al., 2010; Yeung, Holroyd, & Cohen, 2005).
Holroyd and others (Holroyd & Coles, 2002; Holroyd, Coles, & Nieuwenhuis,
2002; Holroyd et al., 2003; Holroyd, Yeung, Coles, & Cohen, 2005) have argued that the
feedback-related negativity constitutes a neural mechanism of reinforcement learning (this
assumption is called the Reinforcement Learning Theory of the FRN). Animal research
suggests that the basal ganglia and the midbrain dopamine system are involved in
reinforcement learning. The basal ganglia continuously evaluate the outcome of ongoing
behaviors or events against a person's expectations. If an outcome is better or worse than
anticipated, this information is conveyed to the ACC via an increase or decrease,
respectively, in phasic activity of midbrain dopaminergic neurons (see, e.g., Berns,
McClure, Pagnoni, & Montague, 2001; Daw & Doya, 2006; Montague, Hyman, & Cohen,
2004; Pagnoni, Zink, Montague, & Berns, 2002; Schultz, 2002, 2004). According to the
Reinforcement Learning Theory, the impact of this dopamine signal determines the
amplitude of the FRN (Holroyd & Coles, 2002): Unexpected positive feedback is followed
by a small FRN amplitude, whereas unexpected negative feedback is associated with a
large negative amplitude (but see, e.g., Oliveira, McDonald, & Goodman, 2007, for a
different finding). In this way, the FRN is believed to reflect the prediction error of reward
(for recent empirical evidence, see Ichikawa, Siegle, Dombrovski, & Ohira, 2010). These
processes are closely linked to the detection of negative temporal difference errors that
indicate the decline of a goal or reward expectancy according to mathematical formulations
of learning theory (Sutton & Barto, 1981, 1998). The ACC, which is connected with motor
regions (Dum & Strick, 1993, 1996; Diehl et al., 2000; van Hoesen, Morecraft, & Vogt,
1993), then makes use of the neural response for the selection of optimal behavior via the
disinhibition of the apical dendrites of motor neurons. At the same time, the error signal is
used to update the expected reward value of the chosen behavioral option so that it better
reflects the reward that was actually observed. In this way, the basal ganglia dopamine
system learns an implicit reinforcement value for each stimulus depending on how often its
choice is associated with positive versus negative reinforcement (see, e.g., Frank, 2005;
Frank, Seeberger, & O'Reilly, 2004). The extent to which expectations are updated is
modeled as the product of the prediction error and the learning rate (Sutton & Barto, 1981,
1998). These assumptions fit well with findings implying that the FRN is related to
subsequent behavioral changes. For example, Cohen and Ranganath (2007) demonstrated
that, across participants, larger FRN amplitudes preceded behavioral switches, which is
perfectly consistent with the assumption that prediction errors signal the need to adjust
behavior (Ridderinkhof, Ullsperger, Crone, & Nieuwenhuis, 2004; Ridderinkhof, van den
Wildenberg, Segalowitz, & Carter, 2004). Yasuda et al. (2004) also found significant
positive correlations between FRN amplitude and the rate of response switching in the
subsequent trial (for related findings of behavioral adaptation see also Cavanagh, Frank,
Klein, & Allen, 2010; Hewig et al., 2007; Philiastides, Biele, Vavatzanidis, Kazzer, &
Heekeren, 2010; Sato & Yasuda, 2004; Van der Helden, Boksem, & Blom, 2010; Yeung &
Sanfey, 2004). Then, with repeated experience, each choice converges on a value reflecting
its integrated reinforcement history (Daw et al., 2005). This assumption is consistent with
evidence that the amplitude of the FRN decreases with learning (e.g., Holroyd & Coles,
2002; Nieuwenhuis et al., 2002; Pietschmann, Simon, Endrass, & Kathmann, 2008; Sailer,
Fischmeister, & Bauer, 2010; but see Cohen et al., 2007 and Eppinger, Kray, Mock, &
Mecklinger, 2008 for different findings), possibly due to the decreasing information value
of the feedback stimulus. In the same way, ACC activity is reduced when the information
value of the current outcome decreases since at the same time, the extent to which this
outcome dictates future actions decreases (Jenkins, Brooks, Nixon, Frackowiak, &
Passingham, 1994; Mars et al., 2005; Matsumoto, Matsumoto, Abe, & Tanaka, 2007; see
Rushworth & Behrens, 2008).
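The value-updating step described above (update = learning rate times prediction error) can be written out as a minimal sketch; the learning rate and reward values are illustrative:

```python
# Minimal Rescorla-Wagner / temporal-difference style value update in the
# sense of Sutton and Barto: the expected value moves toward the observed
# reward by a fraction (the learning rate) of the prediction error.

def update_value(value, reward, learning_rate=0.2):
    prediction_error = reward - value  # on this account, tracked by the FRN
    return value + learning_rate * prediction_error

# With repeated identical feedback, the value converges on the observed
# reinforcement, so the prediction error (and, by hypothesis, the FRN
# amplitude) shrinks with learning:
v = 0.0
for _ in range(30):
    v = update_value(v, reward=1.0)
# v is now close to 1.0, and the next prediction error is close to 0
```

The convergence in the loop illustrates why the FRN is expected to decrease as feedback becomes predictable: each choice's value comes to reflect its integrated reinforcement history, leaving little residual prediction error.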
In sum, the FRN is considered an indicator of reinforcement learning, and its
amplitude is considered to reflect the extent of this process.
1.4.3 LRP as an Indicator of Response Activation
The lateralized readiness potential (LRP; introduced by de Jong, Wierda, Mulder, &
Mulder, 1988, and Gratton, Coles, Sirevaag, Eriksen, & Donchin, 1988; see Eimer, 1998)
is a surface-negative waveform which has its maximum amplitude over the motor cortex.
Research has identified the primary generator of the LRP in the lateral premotor cortex and
the primary motor cortex (e.g., Böcker, Brunia, & Cluitmans, 1994; Leuthold & Jentzsch,
2001, 2002). The LRP, which is typically observed when people make left- versus right-hand responses, provides an index of the preparation for and execution of a motor response
(Coles, 1989; Hackley & Miller, 1995; Masaki, Wild-Wall, Sangals, & Sommer, 2004).
More precisely, it indicates the extent to which the response of one hand is more strongly
activated than the response of the other hand. Therefore, its onset can be seen as a temporal
marker of the moment in which the brain has begun to make a decision for one response or
the other (see review by Smulders & Miller, forthcoming). As soon as such a decision is in
progress, the corresponding motor preparation produces a negative-going potential over the
motor cortex contralateral to the selected hand. The LRP is defined as the difference in
voltage between the contralateral and ipsilateral sites. By analyzing the LRP onset in
waveforms time-locked to either the stimulus or to the response onset, it can be inferred
whether the locus of an experimental effect within information processing is premotoric
versus motoric (Leuthold, Sommer, & Ulrich, 1996; Osman & Moore, 1993; Osman,
Moore, & Ulrich, 1995). Namely, the interval from stimulus onset to the onset of the
stimulus-synchronized LRP (S-LRP) is considered a relative measure of premotoric
processing time, whereas the interval between the onset of the response-synchronized LRP
and the response onset (LRP-R) is thought to be a relative measure of motor processing
time (Masaki et al., 2004). The question of whether the initiation of a movement as
indexed by the onset of the LRP can occur before the conscious decision for a motor
response (Libet, Gleason, Wright, & Pearl, 1983) has not definitely been answered (see
Haggard & Eimer, 1999; Keller & Heckhausen, 1990; Soon, Brass, Heinze, & Haynes,
2008; Trevena & Miller, 2002, 2010).
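The contralateral-minus-ipsilateral definition of the LRP can be sketched as a double subtraction; the electrode names (C3 over left, C4 over right motor cortex) and the simple threshold-based onset criterion are simplifying assumptions for illustration, not a prescription for actual analysis pipelines:

```python
import numpy as np

# Double-subtraction sketch of the LRP (after Coles, 1989): average, over
# left- and right-hand response trials, the averaged waveform at the
# electrode contralateral to the responding hand minus the ipsilateral one.

def lrp(c3_left_hand, c4_left_hand, c3_right_hand, c4_right_hand):
    """Compute the LRP from averaged C3/C4 waveforms (1-D arrays)."""
    left = c4_left_hand - c3_left_hand     # contralateral minus ipsilateral
    right = c3_right_hand - c4_right_hand  # contralateral minus ipsilateral
    return (left + right) / 2.0

def onset_sample(wave, threshold=-0.5):
    """First sample at which the negative-going LRP crosses the threshold."""
    idx = np.flatnonzero(wave <= threshold)
    return int(idx[0]) if idx.size else None
```

Time-locking the input waveforms to the stimulus or to the response before applying `lrp` yields the S-LRP and LRP-R intervals discussed above.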
The LRP has frequently been used to investigate the influence of advance
information on preparatory processes since it is very sensitive to whether there is an
opportunity to prepare for a movement (e.g., Gratton et al., 1990). A number of studies
have shown that precues carrying information about the hand with which a subsequent
response will have to be executed produce a measurable response activation prior to the
onset of a test-stimulus (e.g., Coles, 1989; Hackley & Miller, 1995; Leuthold, Sommer, &
Ulrich, 1996, 2004; Müller-Gethmann, Rinkenauer, Stahl, & Ulrich, 2000; Osman et al.,
1995). Moreover, it has been shown that probabilistic information can modulate the LRP
amplitude during the foreperiod. Several studies have reported an LRP occurring during
the foreperiod when the response hand was specified with high probability (e.g., Gehring,
Gratton, Coles, & Donchin, 1992; Gratton et al., 1990; Miller, 1998; but see Scheibe,
Schubert, Sommer, & Heekeren, 2009). For instance, Miller and Schröter (2002)
demonstrated that sequentially presented information which provided probabilistic
information about what the most likely response would be affected the LRP such that
response preparation was increased near the end of the sequence due to the increased
conditional probability of this response.
1.4.4 N2 as an Indicator of Response Conflict
The label “N2” encompasses multiple negative-going voltage deflections peaking
200 to 400 ms after stimulus presentation (for a review, see Folstein & Van Petten, 2008).
Traditionally, the N2 has been associated with the inhibition of motor responses because its
amplitude is larger after no-go than after go stimuli in go/no-go tasks (e.g., Eimer, 1993;
Jodo & Kayama, 1992; Kok, 1986; Pfefferbaum, Ford, Weller, & Kopell, 1985) and on
stop-signal trials in stop-signal paradigms (e.g., Van Boxtel, Van der Molen, Jennings, &
Brunia, 2001) as well as on incongruent trials in Eriksen flanker tasks (e.g., Heil, Osman,
Wiegelmann, Rolke, & Henninghausen, 2000; Kopp, Rist, & Mattler, 1996).
But more recently it has been argued that the N2 component of the ERP does not
represent response inhibition, but an electrophysiological correlate of pre-response conflict
monitoring by the anterior cingulate cortex (ACC; Nieuwenhuis, Yeung, van den
Wildenberg, & Ridderinkhof, 2003). The ACC (see also 1.4.2) is a mid-frontal brain area
which appears to respond specifically to the occurrence of conflict during information
processing and response selection (e.g., Barch et al., 2001; Botvinick et al., 2001;
Botvinick, Nystrom, Fissell, Carter, & Cohen, 1999; Braver, Barch, Gray, Molfese, &
Snyder, 2001; Carter et al., 1998; but see Burle, Allain, Vidal, & Hasbroucq, 2005). As
previously stated, the ACC is connected with motor regions (Dum & Strick, 1993, 1996;
Diehl et al., 2000; van Hoesen et al., 1993), which suggests a role in high-level motor
control and action selection. More precisely, the ACC is thought to be specifically
sensitive to response-related conflict as opposed to conflict in other processes (e.g., Liu,
Banich, Jacobson, & Tanabe, 2006; Milham & Banich, 2005). Response-related conflict is
present whenever two or more incompatible response tendencies are simultaneously
activated. The ACC then conveys a signal to brain areas involved in the actual execution of
control (such as the prefrontal cortex). Research in cognitive neuroscience has
demonstrated that the processes of controlling conflict are set in motion very early in the
response stream and can operate without conscious awareness (Berns, Cohen, & Mintun,
1997; Nieuwenhuis, Ridderinkhof, Blom, Band, & Kok, 2001). Research has confirmed
the ACC as a likely generator of the N2 (e.g., Bekker, Kenemans, & Verbaten, 2005; Chen
et al., 2008; Jonkman, Sniedt, & Kemner, 2007; Nieuwenhuis et al., 2003).
According to the conflict-monitoring theory of the N2 (Yeung, Botvinick, &
Cohen, 2004; Yeung & Cohen, 2006), conflict is defined as “the product of the activation
levels of the competing response units scaled by the strength of the inhibitory connection
between them” (Yeung & Cohen, 2006, p. 165). In a flanker paradigm, for example, on
incongruent trials the product of response activation levels is large so that conflict is high.
On congruent trials the product is low or even zero, resulting in little or no conflict. Thus,
the conflict-monitoring theory predicts a larger N2 on high-conflict trials relative to low-conflict trials. Moreover, on correct incongruent trials, the correct response is increasingly
activated and dominates competition around the time of motor response execution. This
dominance is maintained (perhaps also reinforced) after the response through continued
stimulus processing. Conflict is therefore observed primarily prior to response execution,
before activation of the incorrect response is completely suppressed by activation of the
correct response.
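This definition of conflict can be written directly as code; the activation values below are made-up illustrations of the congruent and incongruent cases:

```python
# Conflict as defined in the conflict-monitoring account (Yeung & Cohen,
# 2006): the product of the activation levels of the two competing
# response units, scaled by the strength of the inhibitory connection
# between them.

def response_conflict(act_a, act_b, inhibitory_weight=1.0):
    return act_a * act_b * inhibitory_weight

# Flanker-style illustration with hypothetical activations:
congruent = response_conflict(0.9, 0.05)   # one response dominates: low conflict
incongruent = response_conflict(0.7, 0.6)  # co-activation: high conflict
```

The incongruent case yields the larger value (0.42 versus 0.045), which corresponds to the predicted larger N2 on high-conflict trials; when one activation is zero, conflict vanishes entirely, as on fully congruent trials.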
The hypothesis of a frontocentral “conflict N2” peaking about 200-300 ms after
stimulus onset has been supported by empirical data from several studies (e.g., Amodio,
Master, Yee, & Taylor, 2008; Bartholow & Dickter, 2008; Bartholow et al., 2005; Donkers
& van Boxtel, 2004; Enriquez-Geppert, Konrad, Pantev, & Huster, 2010; Fritzsche, Stahl,
& Gibbons, 2010; Mennes et al., 2008). For example, Van Veen and Carter (2002)
recorded brain potentials in a flanker task and found ACC activation following incorrect
responses, but also preceding correct responses in high-conflict trials, which would be
predicted by the conflict theory. In the same way, Yeung et al. (2004) observed conflict-related activity prior to correct responses. They argued that the error-related negativity
(ERN, Falkenstein, Hohnsbein, Hoormann, & Blanke, 1991; Gehring, Goss, Coles, Meyer,
& Donchin, 1993) reflects conflict monitoring occurring after an erroneous response
whereas the N2 reflects conflict monitoring before a correct response. Both the N2 and the
ERN peak when there is co-activation of the representations for correct and incorrect
responses, leading to conflict (e.g., Danielmeier, Wessel, Steinhauser, & Ullsperger, 2009;
Rodríguez-Fornells, Kurzbuch, & Münte, 2002; but see Burle, Roger, Allain, Vidal, &
Hasbroucq, 2008; Masaki, Falkenstein, Stürmer, Pinkpank, & Sommer, 2007). This is
consistent with results showing that the ERN and N2 share a very similar scalp
topography, with peak amplitude at FCz, and also a similar neural source (Nieuwenhuis et
al., 2003; van Veen & Carter, 2002; Yeung et al., 2004).
Several studies have found a relationship between N2/ERN amplitude and
individual differences in conflict monitoring. For example, van Boxtel et al. (2001)
detected a larger N2 elicited by a stop signal in participants who efficiently inhibited a
response. Similar findings were obtained by Stahl and Gibbons (2007) who reported a
relationship between smaller ERN amplitude and reduced response-conflict monitoring
and behavioral control in a stop-signal task. Fritzsche et al. (2010) reported that
participants responding fast and accurately in a localization task had strongly pronounced
N2 amplitudes in response to conflict, which were absent for poor performers. Another
study reported a correlation of stop-signal N2 amplitude with percentage of successful
inhibitions in children (Pliszka, Liotti, & Woldorff, 2000). Pailing, Segalowitz, Dywan,
and Davies (2002) found more pronounced ERN amplitudes in participants with more
controlled response strategies. Larger ERN (and N2) amplitudes compared to normal
controls have also been found in people with obsessive compulsive disorder who are
assumed to have an overactive conflict monitoring system (e.g., Endrass, Klawohn,
Schuster, & Kathmann, 2008; Gehring, Himle, & Nisenson, 2000; Ruchsow et al., 2005;
but see Nieuwenhuis, Nielen, Mol, Hajcak, & Veltman, 2005). Larger ERNs are also
correlated with reduced impulsivity (e.g., Potts, George, Martin, & Barratt, 2006; Martin &
Potts, 2009) and improved emotional regulation (e.g., Compton et al., 2008). A recent
study demonstrated a positive correlation between ERN amplitude in a Stroop task and
academic performance (Hirsh & Inzlicht, 2010). Conversely, attenuated ERNs have been
observed in patients with damage to medial prefrontal cortex (e.g., Stemmer, Segalowitz,
Witzke, & Schönle, 2004; Swick & Turken, 2002; see also Ullsperger & von Cramon,
2006) and in patients with borderline personality disorder (e.g., de Bruijn et al., 2006). In
the domain of social cognition, Amodio, Devine and Harmon-Jones (2008; see also
Amodio et al., 2004) found a positive association between ERN amplitude and the ability
to control unintentional prejudices.
1.4.5 Interindividual Differences in Decision Making Reflected in Neural Activity
Previous research has evidenced that in a number of different experimental settings,
interindividual differences in stable personality traits exhibit a modulatory influence on
activations of specific brain areas (see Engelmann, Damaraju, Padmala, & Pessoa, 2009),
also in the context of decision making (e.g., Berns et al., 2006; Cohen, Young, Baek,
Kessler, & Ranganath, 2005; de Quervain et al., 2004; Huettel, Stowe, Gordon, Warner, &
Platt, 2006; McCabe, Houser, Ryan, Smith, & Trouard, 2001; Paulus, Rogalsky, Simmons,
Feinstein, & Stein, 2003; Sanfey et al., 2003; Tom, Fox, Trepel, & Poldrack, 2007).
Apart from brain imaging studies, relations with interindividual differences in
decision making have also been demonstrated for electroencephalographic activity:
A recent study by Gianotti et al. (2009) found that individuals with higher risk
propensity showed lower baseline cortical activity in the right prefrontal cortex compared
to more risk-averse individuals. Knoch, Gianotti, Baumgartner, and Fehr (2010) also
measured resting EEG and found that baseline cortical activity in the right prefrontal
cortex predicted individuals' punishment behavior. In an ERP study using a flanker
paradigm, Boksem, Tops, Kostermans, and De Cremer (2008) found a relationship
between individual differences in punishment and reward sensitivity with the ERN:
Participants sensitive to punishment displayed a larger ERN when punished for incorrect
responses, while participants high on reward sensitivity showed a larger ERN when they
failed to receive a reward because of responding incorrectly. Balconi and Crivelli (2010)
and De Pascalis, Varriale, and D'Antuono (2010) demonstrated a relationship between
individual punishment sensitivity and FRN amplitude. In a study by Hewig et al. (2007) it
was shown that participants who made high-risk decisions in black jack exhibited smaller
ERN amplitudes to such decisions compared to participants who followed a low-risk
strategy. Yeung and Sanfey (2004) found interindividual differences in risk-taking
behavior after monetary losses which correlated with FRN amplitude. In particular,
participants exhibiting strong FRN amplitudes following losses yielded stronger shifts
toward risk-taking behavior than participants with small FRN amplitudes. Similar
associations between FRN amplitude and shifts in risk-taking behavior were also reported
by Yasuda et al. (2004). Frank et al. (2005) could show that participants who learned
mainly from mistakes and tried to avoid them (“negative learners”) exhibited a larger ERN
amplitude than participants who learned mainly from correct choices (“positive learners”).
Moreover, negative learners had relatively larger FRN amplitudes in response to negative
feedback (compared to positive feedback) than did positive learners. In a study by Polezzi
et al. (2010), participants had to choose between a risky and a safe option in a context with
overall expected zero value and a context with expected positive value. Based on risk
behavior, two groups of participants were identified: people who were risk-seekers in the
zero expected value context and people who were risk-seekers in the positive expected
value condition. The amplitude of the FRN reflected this distinction: in each group, FRN
amplitudes were sensitive to the magnitude of the outcome in the context in which decision
making behavior was random; FRN amplitudes were insensitive to the magnitude of the
outcome in the context in which participants were risk-seeking.
1.5 Summary and Theoretical Implications for the Studies
In summary, dual-process theories (for recent reviews, see Evans, 2008, or Weber
& Johnson, 2009) postulate that decision behavior reflects the interplay of two
fundamentally different types of processes, one of them fast, effortless, and requiring no
conscious control (automatic), and the other one slower, at least in part requiring conscious
supervision, and heavily demanding cognitive resources (controlled). The interaction of
these two types of processes is assumed to be responsible for decision behavior which is
not in accordance with the standard assumptions of rationality in economic paradigms,
such as violations of Bayesian updating (e.g., conservatism: Edwards, 1968; the
reinforcement heuristic: Charness & Levin, 2005; the representativeness heuristic:
Kahneman & Tversky, 1972). Event-related potentials as real-time indicators of
psychological processes (e.g., Macar & Vidal, 2004) seem well-suited to investigate the
conflict of automatic and controlled processes in decision making and to shed light on
interindividual differences in the reliance on the two types of processes (see Epstein et al.,
1996; Betsch, 2004, 2007).
2 Studies
2.1 Study 1a: Bayesian Updating and Reinforcement Learning – Investigating Rational and Boundedly Rational Decision Making by means of Error Rates and the FRN
2.1.1 Overview
Results of many studies have shown that people systematically deviate from
Bayesian rationality (e.g., Charness & Levin, 2009; Ouwersloot et al., 1998; Tversky &
Kahneman, 1982; Zizzo et al., 2000), although Bayesian updating obviously plays an
essential role for human decision making (e.g., El-Gamal & Grether, 1995; Griffiths &
Tenenbaum, 2006). Taking into account a dual-process perspective (see Evans, 2008;
Sanfey & Chang, 2008; Weber & Johnson, 2009) and assuming different rationality levels
analogous to different levels of automaticity versus controlledness (Ferreira et al., 2006), it can
be argued that the seemingly inconsistent findings result from the activity of several
processes: a basic, rational, controlled process leading to Bayesian updating, and a more
automatic process leading to deviations from Bayes' rule. Depending on situational and
person factors, the controlled process is often perturbed by the automatic process, which
results in the use of other decision rules, such as heuristics.
In the present study, the focus is on one such decision rule, namely the
reinforcement heuristic (Charness & Levin, 2005). The results of Charness and Levin
indicate that decisions are influenced by a heuristic based on reinforcement learning (a
“win-stay, lose-shift” principle) which sometimes competes with Bayesian updating. It is
postulated here that the reinforcement heuristic corresponds to a process that is more
automatic in character than the process leading to decisions in accordance with Bayes'
rule. First, reinforcement learning (e.g., Schultz, 1998) is associated with extremely fast
and unconscious brain responses (Holroyd & Coles, 2002). Second, empirical evidence for
this argument has already been presented by Achtziger and Alós-Ferrer (2010) who
investigated response times in the paradigm of Charness and Levin (see 1.2.2.3). However,
response times usually do not purely reflect “on-line” thinking because they are influenced
by processes of action preparation and action implementation so that relevant cognitive
processes are confounded with irrelevant motor processes (Bartholow & Dickter, 2007).
Moreover, it is likely that all behavioral responses involve some combination of automatic
and controlled processes; thus, measures of error rates and response times do not provide
pure indicators of automaticity and controlledness (Jacoby, 1991). A much more fine-grained analysis of underlying mechanisms can be provided by investigating event-related
potentials (ERPs) which are time-locked to certain stimulus events (Fabiani et al., 2000)
and reflect psychological processes related to thinking, feeling, or behavior (e.g., Macar &
Vidal, 2004). The FRN (see 1.4.2) seems especially suitable for investigating the
reinforcement heuristic since the amplitude of this component is thought to reflect the
extent of reinforcement learning (Holroyd & Coles, 2002; Holroyd et al., 2002; Holroyd et
al., 2003; Holroyd et al., 2005). Therefore, decision makers who mainly rely on the
reinforcement heuristic are assumed to differ from Bayesian updaters concerning FRN
amplitude.
Research in psychology has shown that there are relatively stable interindividual
differences in the use of decision making strategies (Betsch, 2004; Betsch & Kunz, 2008;
Epstein et al., 1996; Pacini & Epstein, 1999). Epstein and colleagues found that people
differ regarding trust in their experiential system (faith in intuition) and regarding
enjoyment of cognitive activity (need for cognition). Since faith in intuition is
characterized by heuristic processing as described by Kahneman et al. (1982), it can be
hypothesized that decision makers high in this concept rely on the reinforcement heuristic
to a greater extent than Bayesian updaters.
Whether a particular decision problem implies an automatic or a controlled process
depends not only on person characteristics, but also on situational factors (e.g.,
Epstein, 2007; Gilbert, 2002; Masicampo & Baumeister, 2008; Rieskamp & Hoffrage,
1999; Weber & Lindemann, 2008). For instance, the more controlled process might
dominate the automatic one in certain individuals if performance-related monetary
incentives are introduced (see Beach & Mitchell, 1978; Payne et al., 1988, 1992). Charness
and Levin (2005) reported a negative relationship between cost and frequency of a decision
error within participants. Achtziger and Alós-Ferrer (2010) additionally tested for the
effect of varied monetary incentives between participants and surprisingly found that a
treatment with doubled incentives did not result in significantly lower error rates. The
present study will again investigate if an effect of financial incentives will manifest
between participants, such that decision makers receiving a high monetary amount for
every successful decision will decide more rationally and rely less on the reinforcement
heuristic, compared to decision makers receiving a lesser amount of money.
In order to test these ideas participants made a series of two-draw decisions
according to the paradigm of Charness and Levin (2005), also used in Achtziger and Alós-
Ferrer (2010). For every successful decision, participants received 9 cents or 18 cents,
depending on incentive condition. EEG was recorded while participants made their
decisions and were presented with feedback about the outcomes of their draws. As
dependent variables, error rates and the feedback-related negativity in response to decision
outcomes were measured, and various person characteristics were assessed by means of a
questionnaire.
One of the research aims was to replicate the main results of Charness and Levin
(2005), that is, a low error rate when Bayesian updating and the reinforcement heuristic are
aligned, and a high error rate when the two strategies are opposed. But beyond that, the
present study was intended to shed light on the processes underlying participants' decision
behavior. Therefore, the processing of feedback on decision outcomes was investigated by
means of neuroscientific methods. Since the processing of feedback is reflected in the
amplitude of the FRN (e.g., Gehring & Willoughby, 2002, 2004; Hajcak, Holroyd, et al.,
2005; Holroyd & Coles, 2002; Miltner et al., 1997), this amplitude was assumed to be
related to individual differences in the rate of reinforcement errors. It was expected that
participants with a high rate of reinforcement mistakes (i.e., participants who often rely on
the reinforcement heuristic) would process negative feedback more strongly in the sense of
reinforcement learning than participants with fewer reinforcement mistakes. This would
produce a more pronounced FRN in response to negative feedback. Moreover, the
relationship between person characteristics (e.g., skills in statistics or faith in intuition) and
decision behavior was investigated to a greater extent than in the study by Charness and
Levin. In general, participants with high statistical skills were expected to commit fewer
decision errors. Furthermore, faith in intuition was predicted to be relevant concerning the
use of the reinforcement heuristic. It was expected that intuitive participants would rely
more strongly on the reinforcement heuristic than participants who describe themselves as
relying less on their gut feeling when making decisions. Apart from that, the influence of
monetary incentives was examined in a between-participants design. It was predicted that
participants who received a greater amount of money for their successful decisions would
apply controlled strategies to a greater extent than participants who received a lesser
amount of money.
Thus, the hypotheses of Study 1a were the following8:
H1: The presence or absence of a decision conflict influences decision making:
Switching-error rates are very low when Bayesian updating and the reinforcement heuristic
are aligned (after right draws), but are quite high when these are opposed (after left draws).
H2: Monetary incentives manipulated between participants influence decision
making: Error rates are lower for participants who receive 18 cents per successful draw
than for participants who receive 9 cents per successful draw.
H3: Skills in statistics and faith in intuition affect decision making: Error rates for
all types of errors are lower for participants with higher statistical skills. Participants high
in faith in intuition commit more mistakes after draws from the left urn (reinforcement
mistakes) than participants low in faith in intuition.
H4: The amplitude of the FRN is related to decision making: There is a positive
relationship between the rate of reinforcement mistakes and the magnitude (negativity) of
the FRN amplitude after a negative outcome of the first draw9.
2.1.2 Method
Participants
48 right-handed participants (24 male, 24 female) with normal or corrected-to-normal vision, ranging in age from 19 to 29 years (M = 22.2, SD = 2.49), were recruited
from the student community at the University of Konstanz. Students majoring in
economics were not allowed to take part in the study. In exchange for participation,
participants received 5 € plus a monetary bonus that depended upon the outcomes of the
computer task, as described below. All participants signed an informed consent document
before the start of the experiment. The individual sessions took approximately 90 minutes.
Two of the participants were excluded from data analysis for the following reasons:
One person did not follow the instructions and thus did not properly accomplish the task.
The other person had a poor understanding of the rules of the decision task (as assessed by
the experimenter) and reported having a very low desire to perform well.
8 Note: Due to the characteristics of the experimental procedure (fixed interval between presentation of
feedback and second draw which was necessary for EEG measurement), there were no hypotheses regarding
response times. As Horstmann, Hausmann, and Ryf (2010) point out, the analysis of decision times is not
meaningful if participants are forced to indicate their decision only after a certain time-frame has elapsed.
9 This prediction was made only for the FRN after negative feedback, but not after positive feedback, because
the FRN is especially pronounced after negative feedback (see 1.4.2; e.g., Hewig et al., 2007; Holroyd et al.,
2004; Yeung & Sanfey, 2004).
Design
The study followed a 2 between (magnitude of incentives: high – 18 cents per
successful draw vs. low – 9 cents per successful draw) x 2 between (counterbalance:
payment for drawing blue balls vs. payment for drawing green balls10) x 3 within (first
draw: forced left vs. forced right vs. free) mixed-model design.
Dependent variables were error rates and ERPs (FRN) measured during the
computer task and different person variables assessed by means of a final questionnaire
(see below). These were, for example, faith in intuition, skills in statistics, and grade in the
Abitur (final high school exam in Germany) and in mathematics.
Material
Individual experiments were conducted by two female experimenters in three
different laboratories at the University of Konstanz. Each participant was seated
comfortably in front of a desk with a computer monitor and a keyboard.
The experiment was run on a personal computer (3RSystems K400 BeQuiet; CPU:
AMD Athlon 64x2 Dual 4850E 2.5 GHz; 2048 MB RAM; graphic board: NVIDIA
GeForce 8600 GTS with 256 MB RAM; hard drive: Samsung HDD F1 640GB SATA;
operating system: Windows XP Professional) using Presentation ® software 12.2
(Neurobehavioral Systems, Albany, CA).
Stimuli were shown on a 19-inch computer monitor (Dell P991, resolution: 1024 x 768
pixels, refresh rate: 100 Hz) at a distance of about 50 cm. The two keys on the keyboard
(Cherry RS 6000) that had to be pressed in order to select an urn ('F' key and 'J' key) were
marked with glue dots to make them easily recognizable.
Stimuli were images of two urns containing 6 balls each (image size 154 x 93
pixels) and images of colored balls (blue and green, image size 40 x 40 pixels) (see Figure
2) shown on the screen of the computer monitor.
10 Charness and Levin (2005) paid their participants (in conditions I and II) only for drawing black balls,
thereby neglecting that some characteristics of the stimuli (e.g., the color white) are more easily associated
with positive valence (e.g., by getting rewarded) than others (e.g., the color black), which has frequently been
shown in studies from psychology and neuroscience (see Palmer & Schloss, 2010).
Figure 2. Stimulus pictures on the computer screen: (A) Two urns with concealed balls; between
the urns, the fixation cross; below the urns, feedback on the first and second draw. In this example, a
blue ball was drawn from the left urn and a green ball was drawn from the right urn (in this order or
in reversed order). (B) In the case of forced first draws (see below), the urn which could not be chosen
was greyed out.
In order to reduce eye movements, the distance between the two urns was kept to a
minimum (60 pixels). To control for effects of color brightness on EEG data, a blue and a
green color of the same brightness were chosen for the balls (blue: RGB value 0-0-255;
green: RGB value 0-127-0).
Procedure
The computer program, instructions, and questionnaires were based considerably
on Achtziger and Alós-Ferrer (2010).
Participants arrived at the laboratory individually where they were greeted by the
experimenter and were asked to sit down in front of the PC. They gave written informed
consent for the experiment and their handedness was tested by means of a short
questionnaire.
After application of the electrodes, each participant was asked to read through the
instructions explaining the experimental set-up (see appendix C). The instructions were
very elaborate and described the rules of the decision task in detail, also showing screen
shots of the computer program. The participant was told that after reading through the
instructions, he/she would be asked to explain the experimental set-up and rules, and
would have to answer questions concerning the mechanism of making choices. The
experimenter paid attention to whether the central aspects had been comprehended and
gave explanations in case the participant had any difficulties with the rules or mechanisms.
However, the experimenter did not answer questions concerning possible strategies or
statistical considerations. Finally, by means of an experimental protocol, the experimenter
made a global assessment of the participant's understanding (see appendix C) and asked
for the expected amount of money the participant thought he/she would earn during the
computer task. This elaborate proceeding was meant to ensure that every participant was
fully aware of all aspects of the experiment. Moreover, participants were instructed to
move as little as possible during the computer task, to keep their fingers above the
corresponding keys of the keyboard, and to look straight on at the fixation cross whenever
it appeared on the screen.
Under supervision of the experimenter, each participant completed three practice
trials in order to get accustomed to the computer program. These trials did not differ from
the following 60 experimental trials except for the fact that winning balls were not paid for.
After it was ensured that the participant completely understood the whole
procedure, the Bayesian updating experiment was started during which the EEG was
recorded. For this time span, the experimenter left the room.
During the EEG recording participants were presented with two urns, one on the
left side and one on the right side of the computer screen, very similar to the experiment
reported by Charness and Levin (2005) (see Figure 2; see Achtziger & Alós-Ferrer, 2010).
Each of these urns was filled with 6 balls which were either blue or green, and were
concealed to the participant.
The participant‟s task consisted of choosing one of the urns by pressing one of two
keys of the keyboard ('F' key for the left urn and 'J' key for the right urn), whereupon the
program would “draw” one of the balls in the chosen urn randomly. Depending on
counterbalancing, the participant was only paid for drawing blue or green balls. Monetary
incentives were also manipulated between participants: One group of participants earned
18 cents for every successful draw, compared to only 9 cents in the other group.
There were 60 trials in which the participant made two draws with replacement.
After the participant had chosen an urn for the first draw, the program randomly “drew”
one of the balls in this urn. The color of the drawn ball was revealed and the ball was
replaced. Then, the participant chose an urn for his/her second draw, whereupon the color
of the drawn ball was revealed, too, and he/she could move on to the next trial.
Participants were told that there were two possible states of the world (Up and
Down) which decided about the number of blue and green balls in the two urns and which
both had a prior probability of 50 %. In the left urn, there was always a mix of blue and
green balls, whereas the right urn contained balls of only one color. It was not revealed
which of the two states was actually present. The state of the world did not change
within a trial.
The distribution of balls in the two urns was as follows: In the up state, the left urn
was filled with 4 rewarded balls and 2 non-rewarded balls and the right urn was filled with
6 rewarded balls. In the down state, the left urn contained 2 rewarded balls and 4 non-rewarded balls, while the right urn contained 6 non-rewarded balls. Figure 3 provides an
overview of the distribution of balls in the two urns for a participant who is rewarded for
drawing blue balls (in the other counterbalance condition, the distribution of balls was vice
versa):
State of the World   Probability   Left Urn          Right Urn
Up                   50 %          4 blue, 2 green   6 blue
Down                 50 %          2 blue, 4 green   6 green
Figure 3. Distribution of balls in the two urns if one is rewarded for drawing blue balls.
At the beginning of a trial, the possible distribution of balls in the two urns (see Figure 4)
was presented until the participant pressed the space bar.
Figure 4. Stimulus picture on the computer screen if one is rewarded for drawing blue balls:
Overview of the distribution of balls in the two urns.
After an inter-stimulus-interval of 500 ms (fixation cross, 17 x 17 pixels), the
two urns (see Figure 2) were presented until the participant chose one urn by either
pressing the „F‟ key (left) or the „J‟ key (right) with the corresponding index finger. The
outcome of the “draw” was determined randomly according to the proportions of blue and
green balls in the respective urn. After an inter-stimulus-interval of 1000 ms, feedback in
the form of the color of the drawn ball was presented under the chosen urn for another
1000 ms. Subsequently, an instruction to proceed to the second draw by pressing the space
bar appeared at the bottom of the screen until the participant did so. Then the instruction
disappeared and again the two urns plus the feedback on the first draw were shown until
the participant chose an urn for the second draw by pressing the according key. Again,
after an inter-stimulus-interval of 1000 ms, feedback in the form of the color of the second
drawn ball was presented under the chosen urn for 1000 ms. Next, an instruction to go on
to the next trial appeared under the urns until the participant pressed the space bar,
followed by the overview of possible distributions again. Figure 5 provides an overview of
the sequence of events in the computer experiment.
[Figure 5 schematic, in temporal order: Information (RT) → ISI (500 ms) → Choice Period (RT) →
ISI (1000 ms) → Feedback 1 (1000 ms) → Instruction: Next Draw (RT) → Choice Period (RT) →
ISI (1000 ms) → Feedback 2 (1000 ms) → Instruction: Next Trial (RT)]
Figure 5. Schematic representation of the temporal sequence of events during one trial of the
computer task.
As a constraint, the first draw of every odd-numbered trial (i.e., trial 1, 3, 5...) was
required to be from the left urn (forced left) and, for every even-numbered trial (i.e., trial 2,
4, 6...) the first draw was required to be from the right urn (forced right) during the first 30
trials.11
Depending on the time the participant took for his/her decisions, the experiment
lasted about 10-15 minutes. After all 60 experimental trials were completed, the amount of
money earned during the task was displayed on the screen. Afterwards, the cap and
external electrodes were removed from the participant.
Following the computer experiment, participants filled out a questionnaire (see
appendix C) which measured several control variables: valence and arousal of the two balls
(Self-Assessment Manikin [SAM]; Hodes, Cook, & Lang, 1985; Bradley & Lang, 1994),
the Rosenberg self-esteem scale (Rosenberg, 1965, German version: Ferring & Filipp,
1996; German revision: von Collani & Herzberg, 2003), the Multiple Affect Adjective
Checklist (MAACL; Zuckerman & Lubin, 1965), faith in intuition and need for cognition
(Rational-Experiential Inventory [REI]; Epstein et al., 1996; German version: Keller et al.,
2000), the Self-Control Scale (Tangney, Baumeister, & Boone, 2004). Furthermore,
participants' subjective value of money was measured by asking the following three
questions: “Stellen Sie sich vor, Sie hätten 150 € im Lotto gewonnen. Wie stark würden
Sie sich über den Gewinn freuen?” [“Imagine you have won € 150 in the Lotto. How
intense would be your joy?”]; “Stellen Sie sich vor, Sie hätten 30 € verloren. Wie stark
würden Sie den Verlust bedauern?” [“Imagine you have lost € 30. How intense would be
your sorrow?”]; “Stellen Sie sich bitte Folgendes vor: Sie haben gerade ein Fernsehgerät
für 500 € gekauft. Ein Bekannter erzählt Ihnen, dass er nur 450 € für genau das gleiche
Gerät bezahlt hat. Wie stark würden Sie sich deswegen ärgern?” [“Imagine you have just
bought a TV-set for € 500. Now, an acquaintance tells you that he paid only € 450 for
exactly the same TV-set. How intense is your anger?”] (see Brandstätter & Brandstätter,
1996). Participants were asked to score their responses to these questions by placing a
mark on an analog scale (length: 10 cm), continuously ranging from 0 (“gar nicht” [“I
wouldn't care at all”]) to 10 (“sehr” [“I'd be extremely delighted/upset/angry”]).
11 The consideration behind this restriction was to ensure a sufficient number of first draws from the left side,
since decision behavior in such situations which require the participant to update prior probabilities was the
main research interest. For the second draw, there were no requirements in any of the trials. From trial 31 on
participants could choose whichever urn they wished for all draws. Charness and Levin (2005) restricted the
number of forced trials to the first 20 periods. In the present study as in Achtziger and Alós-Ferrer (2010),
another 10 forced trials were added in order to ensure a sufficient number of first draws from the left urn,
especially since a Bayesian participant would always start with a draw from the right urn if allowed to do so
(see below).
Moreover, participants assessed their skills in statistics and stochastics by answering the
following questions, again by placing a mark on a 10 cm continuous analog scale: “Wie
schätzen Sie Ihre allgemeinen statistischen Vorkenntnisse ein?” [“How would you judge
your previous knowledge in statistics?”]; “Wie schätzen Sie Ihre Fähigkeiten in der
Berechnung von Wahrscheinlichkeiten ein?” [“How would you judge your skills in
calculating probabilities?”]. Answers ranged from 0 (“sehr schlecht” [“very poor”]) to 10
(“sehr gut” [“very good”]). In addition, person variables like the difficulty of the
experiment, effort, and importance of good performance were assessed. Finally,
participants were asked for demographic information.
When the participant had completed the questionnaire, he/she was given the
opportunity to wash his/her hair. Finally, participants were thanked, paid, and debriefed.
Mathematical Background
The experiment is conceptualized in such a way that both urns offer the same
expected payoff if there is only one draw, but for two draws, the expected payoff is higher
when the first ball is drawn from the right side12. This is because a first draw from the right
urn reveals the state of the world since the right urn always contains balls of only one
color. In contrast, after a draw from the left urn, there still remains uncertainty whether the
state is up or down. This information about the present state of the world improves the
expected payoff from the second draw. Altogether, choosing the right urn for the first draw
represents the optimal choice; beginning from the left side is termed a first-draw mistake13.
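The payoff comparison above can be checked numerically. The following sketch is illustrative only (it is not code from the study; the variable names and the reward of 1 per winning ball are assumptions made for the example):

```python
# Illustrative sketch (assumed, not the study's actual code): expected payoffs
# in the two-urn task, with a reward of 1 per winning ball and the urn
# compositions described above.
P_UP = 0.5                             # prior probability of the "up" state
LEFT = {"up": 4 / 6, "down": 2 / 6}    # P(winning ball | state), left urn
RIGHT = {"up": 1.0, "down": 0.0}       # P(winning ball | state), right urn

def value(urn, p_up):
    """Expected payoff of one draw from `urn` given the belief P(up) = p_up."""
    return p_up * urn["up"] + (1 - p_up) * urn["down"]

def posterior_up(won):
    """P(up | outcome of a first draw from the left urn), by Bayes' rule."""
    like_up = LEFT["up"] if won else 1 - LEFT["up"]
    like_down = LEFT["down"] if won else 1 - LEFT["down"]
    joint_up = P_UP * like_up
    return joint_up / (joint_up + (1 - P_UP) * like_down)

# One draw: both urns are worth the same (0.5), so neither dominates.
print(round(value(LEFT, P_UP), 6), round(value(RIGHT, P_UP), 6))

# Starting right reveals the state, so the second draw is made optimally:
# stay right in "up" (p = 1), shift left in "down" (p = 2/6).
total_right_first = value(RIGHT, P_UP) + (P_UP * 1.0 + (1 - P_UP) * LEFT["down"])

# Starting left leaves uncertainty; the optimal second draw uses the posterior.
# A winning and a losing first left draw are equally likely (each 0.5).
total_left_first = value(LEFT, P_UP) + sum(
    0.5 * max(value(LEFT, posterior_up(won)), value(RIGHT, posterior_up(won)))
    for won in (True, False)
)
print(round(total_right_first, 4), round(total_left_first, 4))
```

Both urns yield 0.5 for a single draw, while the two-draw total is 0.5 + 2/3 when starting right versus only 0.5 + 5/9 when starting left (even with optimal Bayesian second draws), which is why a first draw from the left counts as a first-draw mistake.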
The advisements for the second draw are as follows: If the participant draws a ball
from the right side in his/her first draw, the feedback perfectly reveals the state of the
world because the right urn always contains balls of only one color. Thus, the decision for
the second draw is simple: If the participant draws a winning ball from the right urn, he/she
must be in the up state and the rational choice for the second draw would be to choose the
right urn again. If the participant draws a losing ball, the current state of the world must be
down and he/she should make the second draw from the left side. Therefore, the
advisement is “win-stay, lose-shift”, which is in accordance with the reinforcement
heuristic. The respective decisions to stay or shift following a successful or unsuccessful
draw both 'feel right' and are 'rational'. If a participant makes an error in this context, this
12 The difference in expected payoffs between the correct decision (in Bayesian terms) and any type of error
can be seen from Table 3 (Study 1a) and Table 9 (Study 1b).
13 This only refers to the last 30 trials in which the participant was free to choose one or the other urn for the
first draw.
error is considered to be an understanding mistake since the participant obviously has not
fully understood the rules of the experiment or did not pay full attention during the draw.
After a first draw from the left urn, the feedback does not fully reveal the present
state of the world; however, the feedback can be used to update prior beliefs about this
state (prior probabilities). In mathematical terms, after drawing a winning ball from the left
urn, a second draw from the right urn stochastically dominates a second draw from the left
urn. The decision maker should switch to the right urn since, according to Bayes' rule,
the probability of success when choosing the right urn is higher than when choosing the
left urn again. The situation after drawing a losing ball from the left side is analogous: In this
case, according to Bayes' rule, the expected payoff of the left urn is higher than the
expected payoff of the right urn, so the participant should stay with the left urn for his/her
second draw. Therefore, the recommended strategy is "win-shift, lose-stay", which is not aligned with
reinforcement learning models that would make contrary predictions: Drawing a ball that is
paid for signals success and is likely to reinforce continued drawing from the same urn.
The opposite signals failure and is likely to reinforce switching to the other urn. On the
other hand, the participant may sense that drawing a winning ball raises the likelihood that
the state of the world is up, so he/she would have a reason to switch to the right side.
Likewise, drawing a losing ball raises the likelihood that the state of the world is down, so
he/she would have a reason to stay with the left side. So there are two opposite forces
influencing the decision of the participant. If a participant makes an error in this context,
this error is considered a reinforcement mistake since the reinforcement heuristic has
obviously dominated choice according to Bayes' rule.
Figure 6 provides an overview of the possible draws and types of errors14:
14 Note that due to the configuration of balls in the urns, only understanding mistakes after negative feedback (right-lose-stay mistakes) necessarily lead to negative feedback on the second draw, whereas reinforcement mistakes and right-win-shift mistakes may also result in positive feedback (drawing a winning ball). This means that due to the probabilistic nature of the task, a wrong choice in terms of Bayesian updating may nevertheless be a successful choice in terms of positive feedback and may therefore be paid, and vice versa.
Figure 6. Overview of types of errors.
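The posterior reasoning behind "win-shift, lose-stay" can be sketched in a few lines. The concrete likelihoods below are illustrative placeholders, not the actual ball compositions of the urns (which are described in the design section); they are chosen only so that the left urn pays more often in the up state than in the down state:

```python
# Sketch of the Bayesian update after a first draw from the LEFT urn.
# p_win_left_up / p_win_left_down are hypothetical winning probabilities.

def posterior_up(prior_up, p_win_left_up, p_win_left_down, won):
    """P(state = up | outcome of a draw from the left urn), via Bayes' rule."""
    like_up = p_win_left_up if won else 1.0 - p_win_left_up
    like_down = p_win_left_down if won else 1.0 - p_win_left_down
    return prior_up * like_up / (prior_up * like_up + (1 - prior_up) * like_down)

# Illustrative numbers: states equally likely a priori.
post_after_win = posterior_up(0.5, 2/3, 1/3, won=True)    # -> 2/3
post_after_loss = posterior_up(0.5, 2/3, 1/3, won=False)  # -> 1/3

# After a win, "up" is more likely; since the right urn is a sure win in the
# up state, it now has the higher expected payoff: win-shift. After a loss,
# "down" is more likely and the participant should stay left: lose-stay.
```

With these placeholder likelihoods, the posterior rises from .50 to .67 after a win and falls to .33 after a loss, which is exactly the asymmetry that puts Bayes' rule in opposition to the reinforcement heuristic.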
Participants were not briefed on optimal choices or possible strategies. Nevertheless,
elaborate instructions ensured that they had fully comprehended the experimental set-up
and rules (see above).
EEG Acquisition and Analysis
Data were acquired using a BioSemi ActiveTwo system (BioSemi, Amsterdam, The
Netherlands, www.biosemi.com) and analyzed using Brain Electrical Source Analysis
(BESA) software (BESA GmbH, Gräfelfing, Germany, www.besa.de).
The continuous EEG was recorded during the computer task which had a duration
of approximately 10-15 minutes. The participants were seated in a dimly lit room. Data
were acquired using 64 Ag-AgCl pin-type active electrodes mounted on an elastic cap
(ECI), arranged according to the 10-20 system, and from two additional electrodes placed
at the right and left mastoids. Eye movements and blinks were monitored by electrooculogram (EOG) signals from two electrodes, one placed approximately 1 cm to the left
side of the left eye and another one approximately 1 cm below the left eye (for later
reduction of ocular artifact). As per BioSemi system design, the ground electrode during
acquisition was formed by the Common Mode Sense active electrode and the Driven Right
Leg passive electrode. Electrode impedances were maintained below 20 kΩ. Both EEG and
EOG were sampled at 256 Hz. All data were re-referenced off-line to a linked-mastoids
reference and corrected for ocular artifacts with an eye-movement correction algorithm
implemented in BESA software. Feedback-locked data were segmented into epochs from
500 ms before to 1000 ms after stimulus onset; the baseline of 500 ms was used for
baseline correction. Epochs locked to first-draw positive and negative feedback were
averaged separately, producing two average waveforms per participant. Epochs including
an EEG or EOG voltage exceeding +/-100 μV were omitted from the averaging in order to
reject trials with excessive electromyogram (EMG) or other noise transients. Grand
averages were derived by averaging these ERPs across participants. Before subsequent
analyses, the resulting ERP waveforms were filtered with a high-pass frequency of 0.1 Hz
and a low-pass frequency of 30 Hz.
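The segmentation, baseline correction, artifact rejection, and averaging steps described above can be sketched with plain numpy. This is a minimal illustration, not the BESA pipeline actually used: the array `eeg` and the feedback-onset indices are hypothetical, and ocular correction and the 0.1-30 Hz filtering are omitted.

```python
import numpy as np

FS = 256                       # sampling rate in Hz
PRE = int(0.5 * FS)            # 500 ms pre-stimulus baseline
POST = int(1.0 * FS)           # 1000 ms post-stimulus

def feedback_locked_erp(eeg, onsets, reject_uv=100.0):
    """Epoch a (n_channels, n_samples) recording around feedback onsets,
    baseline-correct on the 500 ms pre-stimulus interval, drop epochs
    exceeding +/-100 uV, and average the surviving epochs."""
    epochs = []
    for t in onsets:
        epoch = eeg[:, t - PRE:t + POST].astype(float)
        epoch -= epoch[:, :PRE].mean(axis=1, keepdims=True)  # baseline correction
        if np.abs(epoch).max() <= reject_uv:                 # artifact rejection
            epochs.append(epoch)
    return np.mean(epochs, axis=0), len(epochs)
```

Separate averages for positive and negative first-draw feedback would be built by passing the corresponding onset lists; grand averages are then means of these per-participant ERPs across participants.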
Six participants were eliminated from further ERP analysis because an excessive number
of their trials contained artifacts, leaving too few valid trials to ensure reliable FRN indices15.
On average, 24 % of the trials of the non-excluded participants were rejected.
To quantify the feedback negativity in the averaged ERP waveforms for each
participant, the peak-to-peak voltage difference between the most negative peak occurring
200-300 ms after feedback onset and the positive peak immediately preceding this negative
peak was calculated (e.g., Frank et al., 2005; Yeung & Sanfey, 2004). This window was
chosen because previous research has found the feedback negativity to peak during this
period (Gehring & Willoughby, 2002; Holroyd & Coles, 2002; Miltner et al., 1997).
Consistent with previous studies, FRN amplitude was evaluated at channel FCz, where it is
normally maximal (Holroyd & Krigolson, 2007; Miltner et al., 1997).
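The peak-to-peak measure can be written down directly. The sketch below assumes a single-channel (FCz) average waveform in microvolts, sampled at 256 Hz with index 0 at feedback onset; the simple backward walk to the preceding positive peak is a simplification of proper peak detection.

```python
import numpy as np

FS = 256

def frn_peak_to_peak(erp_fcz):
    """Peak-to-peak FRN: most negative peak 200-300 ms after feedback onset
    minus the positive peak immediately preceding it."""
    win = slice(int(0.2 * FS), int(0.3 * FS) + 1)       # 200-300 ms window
    neg_idx = int(np.argmin(erp_fcz[win])) + win.start  # most negative peak
    # walk backward to the immediately preceding local maximum
    i = neg_idx
    while i > 0 and erp_fcz[i - 1] >= erp_fcz[i]:
        i -= 1
    return float(erp_fcz[i] - erp_fcz[neg_idx])
```

On a waveform with a positive deflection followed by a dip in the 200-300 ms window, the function returns the voltage difference between the two peaks, which is the quantity entered into the analyses below.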
2.1.3 Results
Preliminary Remark concerning the Analysis of Error Rates
Error rates can be analyzed in aggregate over a group of participants or
individually for each participant. For instance, when comparing the frequency of win-stay
mistakes in the two incentive conditions, there are two possible procedures for the
15 Data of these participants were nevertheless included in the analysis of behavioral observations (e.g., error rates) and self-reports.
statistical analysis: One could either count the number of situations in which win-stay
mistakes were possible (first draw left and successful) and the number of following errors
(second draw left) in each condition, resulting in one percentage value for each condition.
Or one could conduct the analysis described above for each participant and then combine
the percentages of participants in the same condition. Both approaches lead to certain
problems. The aggregated analysis does not account for the possibility that the results of a
single person can dramatically affect the cumulative result of the group. The individual
analysis neglects the fact that the individual percentages can be based on completely
different populations (e.g., 2 of 4 potential win-stay mistakes and 10 of 20 potential win-stay
mistakes both result in a 50 % error rate). Therefore, the application of standard
parametric procedures to data analysis is scarcely possible. Accordingly, non-parametric
statistics were used to analyze error rates. For the analysis of individual error rates, the
Mann-Whitney U test (Wilcoxon, 1945; Mann & Whitney, 1947) was used instead of a t-test
for unpaired samples. When analyzing aggregated error rates, Fisher's exact probability test
was applied as an alternative to a Chi-square test for 2 x 2 contingency tables or
imbalanced tables, for instance of the form {Condition} x {Correct, Error}. All tests are
two-tailed unless otherwise specified.
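Both procedures are standard; for concreteness, here is a self-contained sketch of the two test statistics (no stats package; the counts and error rates used below are hypothetical, not the study's data).

```python
from math import comb

def mann_whitney_u(x, y):
    """U statistic: number of (x_i, y_j) pairs with x_i < y_j, counting
    ties as 0.5 (one of the two equivalent U conventions)."""
    return sum(1.0 if xi < yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

def fisher_exact_2x2(a, b, c, d):
    """Two-tailed Fisher's exact p-value for the table [[a, b], [c, d]],
    e.g., {Condition} x {Correct, Error}."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p_table(k):  # hypergeometric probability of a table with cell (1,1) = k
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # sum probabilities of all tables at least as extreme as the observed one
    return sum(p_table(k) for k in range(lo, hi + 1) if p_table(k) <= p_obs + 1e-12)
```

In practice the individual analysis compares one error-rate value per participant across the two incentive groups with the U test, while the aggregated analysis feeds the pooled correct/error counts per condition into the exact test.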
First Overview
As a first step, it was examined how participants, according to their self-evaluations,
had worked on the task, how they had perceived it, and how they rated their skills
in math and statistics. Most ratings were given on
an analog scale (length: 10 cm; i.e., scores ranged continuously between 0 and 10).
In general, participants had a very good understanding of the rules of the
experiment (on a 5 cm analog scale: M = 4.32, SD = 0.78) provided by means of detailed
written instructions and assessed by the experimenter. They reported that they had put
much effort into accomplishing the experiment properly (M = 8.15, SD = 1.26), and found it
important to perform well (M = 6.95, SD = 1.65). On average, participants considered the
task not very difficult (M = 2.91, SD = 2.01). Self-assessed general statistical skills (M =
3.89, SD = 2.62) and skills in calculating probabilities (M = 4.95, SD = 2.40) turned out to
be rather moderate. Concerning the sources of such skills, most participants referred to
previous courses of study encountered throughout their education. Some participants
attributed their ability to games, extracurricular studies or former (similar) experiments at
university. 20 of 45 participants (44.4 %) majored in a subject with obligatory statistics
courses and 11 of 46 (23.9 %) participants declared they were already experienced with the
type of decision scenario implemented in the experiment.
On average, participants won 8.56 €; amounts varied between the two incentive
conditions, ranging from 4.59 € to 12.96 €. The duration of the computer task averaged
about 12 minutes (see Table 1).
Table 1
Amount won and duration of the computer task depending on magnitude of incentives

Incentives   Amount Won in €                   Duration in Minutes
             M (SD)         min     max        M (SD)         Median (IQR)   min     max
High         11.67 (0.87)   9.90    12.96      11.38 (1.55)   11.15 (1.69)   8.80    14.40
Low           5.72 (0.51)   4.59     6.57      12.20 (2.28)   12.03 (4.33)   8.50    15.47
Total         8.56 (3.08)   4.59    12.96      11.81 (1.98)   11.59 (2.95)   8.50    15.47
Equivalence of Groups
When examining differences between experimental conditions regarding the
answers to the questionnaire, one has to keep in mind that participants received the
questionnaire after completing the computer task. Therefore, it cannot be completely ruled
out that characteristics of the behavioral task influenced the later answering of the
questionnaire. For instance, it might be possible that a relatively low win amount led to
lower self-assessment of statistical skills.
Incentives. Before the main analyses, it was tested for differences between the
high-incentive (n = 22) and the low-incentive condition (n = 24) concerning several factors
that might have had an influence on the behavioral results.
Contrary to expectations, the amount of money participants estimated they would
win (prior to the start of the computer task) was only marginally significantly higher in the
high-incentive condition (M = 7.10, SD = 4.86) than in the low-incentive condition (M =
5.13, SD = 4.48), t(44) = 1.43, p = .08 (one-sided).
An undesired difference between conditions was found concerning previous
experience with similar decision scenarios: 9 of 24 (37.5 %) participants in the low-incentive group vs. 2 of 22 (9.1 %) participants in the high-incentive group declared they
were already experienced with similar scenarios, which constitutes a significant difference,
χ²(1, N=46) = 5.09, p < .05. In a similar fashion, there was a marginally significant
difference in the number of participants who majored in a subject with at least one
obligatory statistics course (low-incentive condition: 13 of 23 = 56.5 %; high-incentive
condition: 7 of 22 = 31.8 %), χ²(1, N=45) = 2.78, p = .10.
Apart from that, there were no other noteworthy differences between these two
conditions (all p-values > .186). This means that conditions did not differ regarding
participants' experimenter-assessed understanding of the instructions, effort made to
accomplish the experiment properly, importance of good performance, self-esteem, affect,
latest grade in mathematics, grade in the Abitur, faith in intuition, need for cognition, self-control, or subjective value of money.
Counterbalancing. As half of the participants were paid for drawing blue balls (n =
23) and the other half for drawing green balls (n = 23), it was checked whether this
manipulation led to systematic differences between groups.
First, it was tested if stimuli were evaluated differently in the two counterbalance
conditions (see Table 2). In line with expectations, participants who were paid for a certain
color gave significantly more positive ratings (on a discrete scale from 1 to 5) than
participants who were not paid for this color (blue balls: t(44) = 13.48, p < .001; green
balls: t(43) = 13.23, p < .001). Regarding arousal, the two groups did not differ in their
evaluations (p-values > .280).
Table 2
Evaluation of stimuli depending on counterbalance condition

                         Winning Color Blue   Winning Color Green
                         M (SD)               M (SD)
Valence    Blue Ball     4.43 (0.59)          1.83 (0.72)
           Green Ball    1.91 (0.81)          4.57 (0.51)
Arousal    Blue Ball     2.78 (0.85)          2.91 (0.92)
           Green Ball    3.17 (1.07)          2.77 (1.38)
Unexpectedly, t-tests for unpaired samples revealed significant differences between the
two counterbalance conditions for several variables of the final questionnaire. Participants
paid for drawing blue balls (M = 4.07, SD = 0.97) had a significantly lower experimenter-assessed understanding of the rules than those paid for drawing green balls (M = 4.57, SD
= 0.41), t(29.570) = 2.30, p < .05. Further, these participants (M = 7.76, SD = 1.55)
reported having put significantly less effort into accomplishing the experiment properly than
participants paid for drawing green balls (M = 8.55, SD = 0.72), t(30.955) = 2.22, p < .05.
Moreover, there were two marginally significant effects: participants paid for drawing blue
balls reported worse skills in calculating probabilities (M = 4.32, SD = 2.41; green: M =
5.58, SD = 2.27), t(44) = 1.82, p = .08, and expected to win a lower amount of money (M
= 4.79, SD = 3.27; green: M = 7.36, SD = 5.60), t(35.414) = 1.90, p = .07. Apart from that,
there were no other noteworthy differences between the two counterbalance conditions (all
p-values > .237). Due to these negligible differences (i.e., these differences are not
expected to influence the main dependent variables in an important way), data of both
conditions were pooled. Nonetheless, as a precaution, the dummy representing
counterbalance condition was included in the regression analyses (see below).
Error Rates
Table 3 provides an overview of the error rates for the five different types of
errors (ignoring the distinction between decisions after a forced first draw and a free one).
Table 3
Error frequencies depending on magnitude of incentives

Incentives   First-Draw      Win-Stay       Lose-Shift     Right-Win-       Right-Lose-
             Mistakes        Mistakes       Mistakes       Shift Mistakes   Stay Mistakes
High         34.8 % (660)    65.1 % (275)   47.4 % (285)   3.9 % (381)      12.4 % (379)
             [2]             [2]            [2]            [6]              [6]
Low          26.9 % (720)    64.6 % (280)   56.6 % (274)   5.3 % (435)      10.2 % (451)
             [1]             [1]            [1]            [3]              [3]
Total        30.7 % (1380)   64.9 % (555)   51.9 % (559)   4.7 % (816)      11.2 % (830)

Note. Total number of observations in brackets and error costs (in cent) in square brackets.
Predicting that participants would commit only a few errors when Bayesian updating and
the reinforcement heuristic are aligned, and many errors when the two strategies are
opposed (H1), error rates for understanding mistakes (errors after a first draw from the
right urn) and reinforcement mistakes (errors after a first draw from the left urn) were
compared. When starting with the right urn, 131 of 1646 (8.0 %) decisions were incorrect,
in contrast to 650 of 1114 (58.3 %) decisions when starting from the left. This means that
as in the original study by Charness and Levin (2005) and in the study by Achtziger and
Alós-Ferrer (2010), error rates for win-stay mistakes and lose-shift mistakes (i.e.,
reinforcement mistakes) were considerably higher than for right-win-shift mistakes and
right-lose-stay mistakes (i.e., understanding mistakes).
Another interesting observation is that error rates which involved staying with an
urn were higher than error rates which involved shifting to the other urn.
Next, it was tested whether and how financial incentives affected error rates. According to
H2, participants paid with 18 cents per successful draw should have a lower error rate than
those paid with only 9 cents. Fisher's exact tests were applied to compare the two
conditions over all participants. However, testing revealed a significant difference in the
predicted direction only for lose-shift mistakes, p < .05. For first-draw mistakes there was a
significant effect in the other direction, p < .01. All other comparisons did not reach
statistical significance (p-values > .321). When analyzing individual error rates using
Mann-Whitney U tests, the two incentive conditions did not differ significantly concerning
any type of error (p-values > .293).
As in the study by Charness and Levin (2005), participants' behavior did not change
considerably over the course of the sessions (see Table 4 and Figure 7). It seems that
win-stay mistakes became somewhat more frequent over time, while the rates of lose-shift
mistakes and right-win-shift mistakes dropped during the last trials of the experiment.
Table 4
Error frequencies depending on trial number

                            Trial Number
                            1-15     16-30    31-45    46-60
First-Draw Mistakes         -        -        29.7 %   31.7 %
Right-Win-Shift Mistakes    6.3 %    9.3 %    2.9 %    1.7 %
Right-Lose-Stay Mistakes    15.3 %   7.0 %    10.9 %   11.9 %
Win-Stay Mistakes           63.4 %   60.0 %   64.3 %   75.7 %
Lose-Shift Mistakes         56.9 %   50.0 %   54.2 %   44.8 %
[Line graph: error frequency in % (0-100) across trial blocks 1-15, 16-30, 31-45, and 46-60 for first-draw, right-win-shift, right-lose-stay, win-stay, and lose-shift mistakes.]
Figure 7. Error frequencies depending on trial number.
Charness and Levin stated that "[t]here is considerable heterogeneity in behavior across
individuals" (2005, p. 1307). This can also be seen from the individual error rates of the
participants in the present study (see Figure 8). Error rates ranged from zero to over sixty
percent, but due to the large variance (M = 29.11, SD = 20.19) there were no outliers
or extreme values.
[Histogram: number of participants by individual error rate in % (bins from 0-10 to 60-70), shown separately for the high- and low-incentive conditions.]
Figure 8. Individual heterogeneity regarding error rates.
FRN in Response to First-Draw Feedback
The grand-average ERP waveforms (n = 40) from channel FCz, as a function of
reward valence of the outcome of the first draw, are shown in Figure 9A. A substantial
FRN with a frontocentral scalp topography was evident; it tended to be more
pronounced for negative than for positive feedback. The corresponding
topographies of the FRN are shown in Figure 9B.
[Panel A: grand-average waveforms at FCz (amplitude in µV; -250 to 750 ms relative to feedback onset) for negative and positive feedback, with the FRN marked. Panel B: scalp topographies of the FRN (scale ±6 µV) for negative and positive feedback.]
Figure 9. (A) Grand-average FRN as a function of valence of first-draw feedback. (B)
Topographical mapping of the FRN as a function of valence of first-draw feedback.
Using a median split procedure (separately for the two incentive conditions), participants
were classified as having either a low or high rate of reinforcement mistakes16. In order to
test H4, the values of the amplitude of the FRN in response to the first feedback were
subjected to an analysis of variance (ANOVA) using the within-subjects factor valence of
feedback (negative vs. positive) and the between-subjects factors magnitude of incentives
(low vs. high) and rate of reinforcement mistakes (low vs. high). This ANOVA revealed a
significant main effect of rate of reinforcement mistakes, F(1, 36) = 4.94, p < .05, η² = .12.
The within-subjects factor valence of feedback did not reach significance, F(1, 36) = 2.27, p = .14,
η² = .06, nor did the between-subjects factor magnitude of incentives, F(1, 36) = 2.56, p = .12, η² =
.07. The two-way interactions with valence of feedback were not significant (valence of
feedback x rate of reinforcement mistakes: F < 1; valence of feedback x magnitude of
incentives: F < 1), but the two-way interaction between rate of reinforcement mistakes and
magnitude of incentives reached significance, F(1, 36) = 5.53, p < .05, η² = .13. The three-
16 In the low-incentive group, the median percentage of reinforcement errors was 59.4 %, as opposed to 66.7 % in the group paid with high incentives.
way interaction valence of feedback x rate of reinforcement mistakes x magnitude of
incentives was marginally significant, F(1, 36) = 2.84, p = .10, η² = .07.
To further analyze this three-way interaction, separate ANOVAs were conducted
for the two incentive conditions. For the group paid with low incentives, a 2 within
(valence of feedback: negative vs. positive) x 2 between (rate of reinforcement mistakes:
low vs. high) ANOVA revealed no significant effects (all Fs < 1). For participants paid with
high incentives, there was a significant main effect of rate of reinforcement mistakes, F(1,
18) = 8.78, p < .01, η² = .33. The main effect of valence of feedback was not significant,
F(1, 18) = 1.81, p = .20, η² = .09. The interaction between valence of feedback and rate of
reinforcement mistakes did not reach significance either, F(1, 18) = 2.14, p = .16, η² = .11.
However, t-tests for unpaired samples within the high-incentive group showed that
only for first-draw negative feedback, participants with a low rate of reinforcement
mistakes (M = 2.65, SD = 1.98) had a significantly smaller FRN amplitude than
participants with a high rate of reinforcement mistakes (M = 8.80, SD = 7.42), t(7.671) =
2.29, p < .05 (one-sided). For positive feedback, the difference between participants with a
low rate of reinforcement mistakes (M = 2.80, SD = 2.50) and participants with a high rate
of such mistakes (M = 5.19, SD = 4.57) did not reach significance, t(18) = 1.51, p = .15.
In Figures 10 to 13, all grand-average waveforms are plotted with the
corresponding topographies of the FRN at FCz, depending on magnitude of incentives and
rate of reinforcement mistakes. Table 5 presents FRN amplitude values at FCz.
[Panel A: grand-average waveforms at FCz for the low-incentive (n = 20) and high-incentive (n = 20) groups, for negative and positive feedback (-250 to 750 ms). Panel B: corresponding scalp topographies of the FRN (scale ±6 µV).]
Figure 10. (A) Grand-average FRN as a function of magnitude of incentives and valence of first-draw feedback. (B) Topographical mapping of the FRN as a function of magnitude of incentives
and valence of first-draw feedback.
[Panel A: grand-average waveforms at FCz for participants with a low (n = 22) vs. high (n = 18) rate of reinforcement mistakes, for negative and positive feedback (-250 to 750 ms). Panel B: corresponding scalp topographies of the FRN (scale ±6 µV).]
Figure 11. (A) Grand-average FRN as a function of rate of reinforcement mistakes and valence of
first-draw feedback. (B) Topographical mapping of the FRN as a function of rate of reinforcement
mistakes and valence of first-draw feedback.
[Low-incentive condition. Panel A: grand-average waveforms at FCz for participants with a low (n = 10) vs. high (n = 10) rate of reinforcement mistakes, for negative and positive feedback (-250 to 750 ms). Panel B: corresponding scalp topographies of the FRN (scale ±6 µV).]
Figure 12. (A) Grand-average FRN in the low-incentive condition as a function of rate of
reinforcement mistakes and valence of first-draw feedback. (B) Topographical mapping of the FRN
in the low-incentive condition as a function of rate of reinforcement mistakes and valence of first-draw feedback.
[High-incentive condition. Panel A: grand-average waveforms at FCz for participants with a low (n = 12) vs. high (n = 8) rate of reinforcement mistakes, for negative and positive feedback (-250 to 750 ms). Panel B: corresponding scalp topographies of the FRN (scale ±6 µV).]
Figure 13. (A) Grand-average FRN in the high-incentive condition as a function of rate of
reinforcement mistakes and valence of first-draw feedback. (B) Topographical mapping of the FRN
in the high-incentive condition as a function of rate of reinforcement mistakes and valence of first-draw feedback.
Table 5
Mean peak amplitude (grand-average) of the FRN as a function of magnitude of incentives, rate of
reinforcement mistakes, and valence of first-draw feedback
                                           Valence of Feedback
Magnitude of    Rate of Reinforcement      Negative       Positive
Incentives      Mistakes                   M (SD)         M (SD)
Low             Low                        3.96 (3.24)    2.89 (3.27)
                High                       3.25 (2.69)    3.36 (3.23)
                Total                      3.61 (2.92)    3.13 (3.17)
High            Low                        2.65 (1.98)    2.80 (2.50)
                High                       8.80 (7.42)    5.19 (4.57)
                Total                      5.11 (5.67)    3.76 (3.57)
Total           Low                        3.25 (2.64)    2.84 (2.81)
                High                       5.72 (5.88)    4.17 (3.87)
                Total                      4.36 (4.51)    3.44 (3.35)
Regression Analyses
The present data form a strongly balanced panel, with 60 second-draw observations
per participant. Data were analyzed by means of random-effects probit regressions on
second-draw decisions (error vs. correct decision), assuming non-independence of
observations within participants.
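The dissertation does not specify the estimation software, so the following is only a sketch of the model itself: the random-effects probit log-likelihood, with each participant's random intercept u_i ~ N(0, sigma²) integrated out by Gauss-Hermite quadrature. Data shapes and parameter values are illustrative, not the study's estimates.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def re_probit_loglik(beta, sigma, X_by_subj, y_by_subj, n_nodes=20):
    """Log-likelihood of a random-intercept probit: P(y=1 | u) = Phi(X beta + u),
    u ~ N(0, sigma^2) integrated out per participant by Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    loglik = 0.0
    for X, y in zip(X_by_subj, y_by_subj):        # one block per participant
        eta = X @ beta                            # fixed-effects linear index
        lik_i = 0.0
        for node, w in zip(nodes, weights):
            u = sqrt(2.0) * sigma * node          # change of variables for N(0, sigma^2)
            p = np.clip(np.array([norm_cdf(e + u) for e in eta]), 1e-12, 1 - 1e-12)
            lik_i += w / sqrt(np.pi) * np.prod(np.where(y == 1, p, 1.0 - p))
        loglik += np.log(lik_i)
    return float(loglik)
```

Maximizing this function over beta and sigma (e.g., with a numerical optimizer) yields estimates of the kind reported in Table 6; the per-participant product inside the integral is what encodes the non-independence of the 60 observations within each participant.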
Table 6 presents the results of the regression analyses. Contrary to expectations
(H2), yet consistent with the results of the Fisher's tests and Mann-Whitney U tests
reported above, the manipulation of incentives did not have a significant effect. Similarly,
the opportunity costs of errors did not significantly affect the likelihood of an error's
occurrence. The counterbalance dummy was also not significant which can be regarded as
a validation of the basic design. As postulated in H1, the existence of a decision conflict
between Bayesian updating and the reinforcement heuristic (i.e., a first draw from the left
urn) made the occurrence of an error much more likely. Time (trial number divided by 60)
had a highly significant negative effect on the probability of an error. Neither the dummy
variable for a forced vs. a free first draw, nor the dummy capturing success vs. no success
on the first draw, were significant.
Concerning personal characteristics of the participants, a gender effect was found
such that errors were more likely for female participants. When controlling for knowledge
in statistics, this effect was reduced, but did not disappear. In line with expectations (H3),
self-assessed knowledge in statistics was a highly significant negative predictor of an
error's likelihood. As expected (H3), a dummy representing the interaction between faith
in intuition and conflict was significant, which means that a high level of faith in intuition
made errors more likely in situations in which Bayesian updating and the reinforcement
heuristic were opposed (i.e., made reinforcement errors more likely). In the same way, a
large FRN amplitude in response to first-draw negative feedback increased the probability
of a reinforcement error (H4).
Table 6
Random-effects probit regressions on second-draw errors (0=correct choice, 1=error)
Variable                               Probit 1            Probit 2            Probit 3
Incentives (1 = High)                   0.057 (0.312)      -0.197 (0.284)      -0.271 (0.285)
Counterbalance                          0.242 (0.286)       0.099 (0.249)       0.094 (0.246)
Time                                   -0.639*** (0.245)   -0.657*** (0.246)   -0.652*** (0.246)
Conflict (1 = Yes)                      1.936*** (0.247)    1.166*** (0.436)    1.063** (0.438)
Forced (1 = Yes)                       -0.179 (0.143)      -0.195 (0.143)      -0.188 (0.143)
Opportunity Cost                        0.006 (0.077)       0.018 (0.077)       0.054 (0.080)
Result of First Draw (1 = Success)      0.035 (0.071)       0.033 (0.071)       0.034 (0.071)
Personal Characteristics
Gender (1 = Male)                      -0.585** (0.288)    -0.461* (0.249)     -0.422* (0.248)
Knowledge In Statistics                                    -0.165*** (0.049)   -0.159*** (0.048)
Faith In Intuition x Conflict 17,18                         0.140** (0.068)     0.155** (0.068)
Electrocortical Responses
FRN (Neg FB1) x Conflict 19,20                                                  0.265* (0.157)
Constant                               -1.191*** (0.437)   -0.421 (0.469)      -0.584 (0.476)
log likelihood                         -863.252            -855.594            -854.148
Wald test                               633.96***           638.97***           642.37***

Notes. Number of observations = 2400 (40 x 60). Numbers in brackets are standard errors.
*p < .10. **p < .05. ***p < .001.

17 Internal consistency as measured by Cronbach's alpha was satisfactory (alpha = .74).
18 An alternative model in which faith in intuition was included as a separate predictor variable was not better.
19 To make this measure more comparable to the other variables, FRN amplitude was multiplied by a factor of 10.
20 An alternative model in which FRN amplitude was included as a separate predictor variable was not better.
2.1.4 Discussion
By means of an experimental binary-action belief-updating paradigm (see Charness
& Levin, 2005; see also Achtziger & Alós-Ferrer, 2010), the conflict between heuristic and
rational decision strategies was examined. Particular focus was on participants' error rates
and on the feedback-related negativity (FRN). Additionally, several relevant person
variables were assessed, e.g., faith in intuition and skills in statistics.
The following issues were investigated in the present study: (1) Do switching-error
rates differ between situations when Bayesian updating and the reinforcement heuristic are
aligned (after right draws) and situations when these are opposed (after left draws)? (2) Do
higher monetary incentives, manipulated between participants, reduce error rates? (3) Are
skills in statistics and faith in intuition related to error rates? (4) Does the magnitude of the
FRN amplitude after a negative outcome of the first draw positively correlate with the rate
of reinforcement mistakes?
Overall, both the error rates and the FRN were revealing of different strategies
which participants employed when confronted with new information that could be used as
a basis for later economic decisions. Error rates were very low when Bayesian updating
and the reinforcement heuristic were aligned. By contrast, participants made many more
decision errors in those situations in which the two strategies were opposed. ERP results
showed that (provided that monetary incentives were not too low) the reinforcement
behavior of participants showed a significant correlation with the magnitude of the FRN
amplitude. The greater the extent to which participants employed heuristic decision
strategies, the more pronounced was the amplitude of the FRN in response to first-draw
negative feedback. In addition, faith in intuition and skills in statistics were correlated with
participants‟ decision behavior. Contrary to predictions, there was no effect of monetary
incentives (manipulated between participants) on error rates. Likewise, there was no effect
of opportunity costs on error frequency within participants. However, the magnitude of
incentives moderated the relationship between the rate of reinforcement errors and FRN
amplitude.
On the whole, the results support the basic hypothesis that decisions in this
paradigm can be understood as the result of the interaction between two processes: a rather
automatic reinforcement process corresponding to a heuristic which ignores prior
probabilities and a more controlled, deliberative process of Bayesian updating which can
be considered rational in an economic sense.
On average, according to self-reports, participants had a very good understanding of the
rules of the decision task, invested a lot of effort in accomplishing the experiment properly,
and considered the task rather simple.
Consistent with the results of Charness and Levin (2005), participants' decision
behavior did not change considerably over the course of the experiment, as apparent from
the inspection of error rates for different sections of the task. However, the more fine-grained regression analyses revealed a significant negative effect of trial number on the
likelihood of an error. This means that there was some tendency for error rates to drop with
increasing time and experience. The finding of a learning effect is remarkable because one
might assume that the probabilistic nature of the experiment and the limited number of
trials made it difficult to link outcome feedback with the effectiveness of one‟s decision
strategy.
As evident from descriptive statistics and corroborated by the regression analyses, error
rates for win-stay mistakes and lose-shift mistakes (i.e., reinforcement mistakes) were
considerably higher than for right-win-shift mistakes and right-lose-stay mistakes (i.e.,
understanding mistakes). This is perfectly in line with the findings of Charness and Levin
(2005) and Achtziger and Alós-Ferrer (2010) and confirms H1. As expected, participants
committed few mistakes when the controlled process of Bayesian updating and the rather
automatic process of reinforcement learning were aligned (after draws from the right urn).
In contrast, many decision errors occurred after first draws from the left urn, i.e., in
situations in which Bayesian updating and the reinforcement heuristic were opposed. This
can be explained by the fact that such situations create a decision conflict between a rather
automatic and a more controlled process. The results are remarkably congruent with the
qualitative predictions of dual-process theories (see Evans, 2008; Sanfey & Chang, 2008;
Weber & Johnson, 2009). The reinforcement heuristic is regarded as a rather automatic
process since it is assumed to be immediately and unconsciously triggered by the
emotional attachment of feedback (valence). Affective valence (e.g., Bargh, 1997;
Cacioppo, Priester, & Berntson, 1993; Slovic et al., 2002; Zajonc, 1998) is always likely
to represent an attribute for a heuristic strategy and hence guide decision making because it
is routinely evaluated as part of perception and, therefore, is always accessible (Kahneman
& Frederick, 2002). This process may be termed “associative intuition” (Glöckner &
Witteman, 2010a). One reason why automatic processes often lead to deviations from
rational behavior is that they consider all salient information encountered in the
environment, regardless of how relevant this information is for upcoming decisions
(Glöckner & Betsch, 2008b). For example, the fact that a winning ball drawn from the left
or right urn is a successful outcome and earns some amount of money is not relevant for
the calculation of posterior probabilities. Still, this fact is taken into consideration by
people relying on the reinforcement heuristic. The corresponding response tendency may
conflict with more controlled processes, thus resulting in an increased probability of errors;
or it may deliver the correct decision, resulting in a decreased probability of decision
errors.
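The opposition between the two processes can be made concrete with a short sketch of a stylized two-urn task in the spirit of Charness and Levin (2005). Note that the prior and the winning probabilities below are hypothetical values chosen purely for illustration (so that the two strategies disagree after left first draws, as in the paradigm described above); they are not the parameters of the actual study.

```python
# Illustrative sketch (not the study's actual code or parameters): Bayesian
# updating versus the "win-stay, lose-shift" reinforcement heuristic in a
# stylized two-urn task with an unknown state ('up' or 'down').

def posterior_up(prior_up, p_win_up, p_win_down, won):
    """Posterior probability of the 'up' state after feedback on a first draw."""
    like_up = p_win_up if won else 1.0 - p_win_up
    like_down = p_win_down if won else 1.0 - p_win_down
    return like_up * prior_up / (like_up * prior_up + like_down * (1.0 - prior_up))

def bayesian_choice(first_urn, won, prior_up, p_win):
    """Pick the second-draw urn that maximizes the posterior winning probability."""
    post = posterior_up(prior_up, p_win[first_urn]["up"], p_win[first_urn]["down"], won)
    expected = {urn: post * probs["up"] + (1.0 - post) * probs["down"]
                for urn, probs in p_win.items()}
    return max(expected, key=expected.get)

def reinforcement_choice(first_urn, won):
    """Win-stay, lose-shift: uses only feedback valence, ignoring prior probabilities."""
    other = {"left": "right", "right": "left"}
    return first_urn if won else other[first_urn]

# Hypothetical winning probabilities per urn and state (assumed for illustration):
# the left urn is only mildly informative, while the right urn pays off only in
# the 'up' state, so left draws put the two strategies in opposition.
p_win = {"left": {"up": 0.6, "down": 0.4},
         "right": {"up": 1.0, "down": 0.0}}

for won in (True, False):
    bayes = bayesian_choice("left", won, 0.5, p_win)
    heuristic = reinforcement_choice("left", won)
    print(f"left first draw, won={won}: Bayes -> {bayes}, heuristic -> {heuristic}")
```

With these illustrative numbers, after a winning first draw from the left urn Bayesian updating favors shifting to the right urn while the win-stay response is to stay left; after a losing left draw, the prescriptions reverse. Trials of this kind correspond to the conflict situations in which reinforcement mistakes occur.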
An interesting observation is that for reinforcement mistakes, as well as for
understanding mistakes, error rates which involved staying with an urn were higher than
error rates which involved shifting to the other urn. The same observation (for
understanding mistakes) was made by Achtziger and Alós-Ferrer (2010). This suggests the
existence of decision inertia, a bias that predominantly favors a decision to maintain one’s course of action as opposed to switching to a different one (see Westfall, 2010). In some ways
related to this, Ariely, Loewenstein, and Prelec (2003) proposed a model in which people
use previous decisions as an input for later decisions. According to the model, past
behavior is used as a signal of preferences and consequently results in repeating previous
decisions (see also Ariely & Norton, 2008; for recent empirical evidence see for example
Sharot, Velasquez, & Dolan, 2010).
The finding of Achtziger and Alós-Ferrer (2010) that the likelihood of an error is
increased after a first forced draw, as well as after negative feedback on the first draw,
could not be replicated. Neither the dummy variable for a forced vs. free first draw nor the dummy capturing success vs. no success on the first draw was significant. One can only speculate about why these results were not replicated here; one possible explanation is the different setting (undergoing EEG recording while making decisions).
H2 could not be confirmed by the data. The between-manipulation of financial incentives
did not lead to the expected effects. Participants who received 18 cents per successful draw
in general did not commit fewer mistakes than participants who received 9 cents per
successful outcome. When analyzing the aggregated participant data, there was an effect in
the predicted direction only for lose-shift mistakes. For first-draw mistakes, there was even a significant effect in the other direction, with a lower error rate for participants in the low-incentive condition. Further, Mann-Whitney U tests for the comparison of individual error rates did not reveal any significant effect in the predicted direction. The random-effects
probit regression analyses did not show a significant correlation between incentive
condition and error frequencies, nor was there any effect of opportunity costs on error
likelihood. The latter is inconsistent with Charness and Levin (2005) and Achtziger and
Alós-Ferrer (2010) who detected a negative relationship between the frequency and the
cost of an error. In contrast, results of the present study suggest that within participants,
error rates were rather independent of error costs. One possible explanation for the
different finding could be that these earlier studies used considerably larger samples (more
than three times as many participants) as well as more treatment conditions, resulting in a
higher number of different opportunity costs.
When interpreting the null-finding of a between-effect of incentive manipulation,
one important caveat is that, in spite of random assignment, the two conditions differed
regarding knowledge in statistics. Compared to participants in the high-incentive condition,
more participants in the low-incentive condition were familiar with similar decision
scenarios and majored in a subject with an obligatory statistics course. This biased
distribution of participants could cause a lack of evidence in favor of an effect of the
incentive manipulation on error frequencies. On the other hand, the null-finding of a
between-manipulation is not only consistent with the findings by Achtziger and Alós-Ferrer (2010), but is also quite consistent with the literature on the effect of incentives
manipulated between participants. Camerer and Hogarth (1999) reviewed 74 experiments
with high, low, or no monetary incentives, and concluded that the most common result is
no effect on mean performance (see also Bonner, Hastie, Sprinkle, & Young, 2000;
Jenkins, Mitra, Gupta, & Shaw, 1998). They offer several possible explanations for why
high financial incentives can fail to increase decision performance, amongst them being
that increased incentives might not result in increased effort if the perceived marginal
return to effort is low. This is the case for very easy or very difficult tasks (see also Brase,
2009). The present task could be considered difficult in a broader sense as participants
have to take into account different probabilities and, at the same time, are led astray by
reinforcement experiences.²¹ Furthermore, Camerer and Hogarth (1999) state that even if
more effort is motivated by external incentives, this increased effort does not necessarily
increase performance if people are supposed to spontaneously apply principles of rational
choice (e.g., Bayes’ rule) – which is exactly the case in the present task. Consistent
with this, Bonner et al. (2000) found frequent positive effects of incentives in domains
where little skill is required (e.g., pain endurance, vigilance, or detection), but much weaker evidence for positive effects in judgment or choice tasks. Camerer and Hogarth stress: “There is no replicated study in which a theory of rational choice was rejected at low stakes in favor of a well-specified behavioral alternative, and accepted at high stakes.” (1999, p. 33). In general, the probability that incentives positively affect performance seems to be negatively linked to task complexity (Bonner & Sprinkle, 2002). For instance, Zizzo et al. (2000) found that only for simplified problems with reduced complexity were monetary incentives plus feedback able to significantly improve participants’ performance. The present task can be considered complex since participants must take into account different probabilities which vary with the environmental condition, which itself must be inferred from the result of the first draw. Additionally, the participants are in some ways led astray by the reinforcement heuristic.

²¹ However, this argument is somewhat at odds with the fact that participants rated the task as relatively simple (see above).
One might also conjecture that the difference of 9 cents between the two incentive
conditions was not large enough to elicit differential decision behavior. After all, the
amount of money participants expected to win before the start of the experiment was only
marginally higher in the 18 cents group as compared to the 9 cents group.
As predicted in H3, personality variables had a considerable influence on decision
behavior and thus on error rates. This corresponds to the huge interindividual heterogeneity
seen in the error rates. Individual error rates ranged from zero percent to over sixty percent,
without any statistical outliers or extreme values. Every error rate range was well
represented, a result consistent with the findings of Charness and Levin (2005).
In line with expectations, results of the regression analyses showed that participants’ skills in statistics significantly affected their decision performance. The higher the statistical skills participants reported having, the lower was their likelihood of committing a decision error. This finding also suggests that participants were quite good at assessing their own level of knowledge in statistics. Actually, it is not surprising that people who have not been taught the application of controlled strategies (i.e., Bayes’ rule) and/or who are not very skilled at using these strategies have a higher risk of being led astray by heuristics. This finding is consistent with the results of Achtziger and Alós-Ferrer (2010). Yet these results contradict the findings of Tversky and Kahneman (1971,
1974; Kahneman & Tversky, 1973; see also Kahneman et al., 1982) which show that
people with extensive training in statistics had the same intuitions as those without and
were therefore prone to similar biases. In the same way, Lindeman, van den Brink, and
Hoogstraten (1988) gave corrective feedback on participants’ solutions to problems like
those used by Kahneman and Tversky (1973) and found no effect in a later test phase.
Ouwersloot et al. (1998) also concluded that previous knowledge of statistics had no
influence on participants’ probability updating behavior. Awasthi and Pratt (1990) also
found that people who took additional quantitative statistics courses were not better at
applying decision rules in tasks pertaining to conjunction probability, sample size, and
sunk cost (for a review of related findings, see also Sedlmeier & Gigerenzer, 2001; but see
for example Fong, Krantz, & Nisbett, 1986; Kosonen & Winne, 1995; Lehman, Lempert,
& Nisbett, 1988 for different findings). One possible explanation for the different findings
is that in the studies which found no effect of training in statistics, participants were
presented with concrete decision scenarios from everyday life which were framed in a
particular way; in contrast, the present paradigm was reduced to mere statistical aspects
and was therefore very abstract. Drawing balls from urns with particular probabilities is not a scenario that every person will necessarily encounter; however, it is a scenario which some students have learned about at some point or another. Perhaps those
students who have been taught these principles and who were also skilled at applying them
recognized these decision scenarios and drew upon their pre-existing knowledge.
Also consistent with expectations, the regression analyses revealed that the concept
faith in intuition was highly related to error rates in conflict situations, i.e., to rates of
reinforcement mistakes. Faith in intuition reflects engagement and trust in one‟s own
intuition and in the experiential system (Epstein et al., 1996) and is further characterized by
heuristic processing as described by Kahneman et al. (1982). The reinforcement heuristic
can be regarded as an intuitive, heuristic strategy as opposed to a reflective strategy
(“associative intuition”, see Glöckner & Witteman, 2010a). It is the positive or negative
affect attached to the feedback on an action that is used to guide further actions (e.g.,
Bechara et al., 1997) without any deliberate considerations or calculations. Unconscious,
affective reactions towards decision options that have been learned implicitly lead to the
activation of the previously successful behavioral option or inhibit the activation of the
previously unsuccessful option (Glöckner & Witteman, 2010a). The underlying
assumption that positive and negative valence was attached to feedback was confirmed in
the present study by the finding that participants who were paid for a certain color gave
significantly more positive ratings than participants who were not paid for this color. This
affective attachment might have activated the previously successful behavioral option, or it
might have inhibited the previously unsuccessful option. One may argue that people high
in faith in intuition trusted in these gut feelings and did not try to suppress this automatic
process. This proposition is in line with empirical evidence that intuitive decision makers
rely more strongly on implicit knowledge (Betsch, 2007; Richetin et al., 2007; Woolhouse &
Bayne, 2000) and on feelings (Schunk & Betsch, 2006).
Charness and Levin (2005) as well as Achtziger and Alós-Ferrer (2010) found that
women tended to deviate from Bayes’ rule more often than men. The same finding was observed in the present study. The fact that this effect was somewhat reduced by
controlling for skills in statistics suggests that it was at least partly based on gender
differences in statistical knowledge. One might suppose that the differences in error rates
between women and men reported in other studies which used belief updating tasks (e.g.,
Charness & Levin, 2005; see Achtziger & Alós-Ferrer, 2010) may have partly resulted
from the fact that gender was confounded with several other person variables, such as
skills in statistics. In our society, gender is confounded with a lot of other variables which
have to be taken into account before final conclusions about a gender effect can be drawn
(Denmark, Russo, Friese, & Sechzer, 1988). It is possible that not all relevant variables
were controlled for in the present study. Another possible explanation for the observed
gender difference is the phenomenon of stereotype threat. Stereotype threat refers to the fear of confirming a negative stereotype about one’s group, which results in poorer performance (Steele & Aronson, 1995). Stereotypes about gender differences in thinking
styles and mathematical abilities are widely held in western society (e.g., Gilligan, 1982;
Tomlinson-Keasey, 1994). While the male gender is associated with rational and logical
thinking and thus better mathematical skills, the female gender is generally associated with
intuitive, feeling-based, and sometimes irrational thinking. Many studies have shown that
stereotype threat exists for women concerning mathematical abilities (e.g., Brown &
Josephs, 1999; Davies, Spencer, Quinn, & Gerhardstein, 2002; Inzlicht & Ben-Zeev, 2000;
Spencer, Steele, & Quinn, 1999). This effect can be activated very easily. It is conceivable
that the paradigm of drawing balls from urns with particular prior probabilities had such a
mathematical connotation that it was sufficient to make the group stereotype salient and
elicit stereotype threat for female participants.
H4, which postulated a positive relationship between FRN amplitude and reinforcement
behavior of participants, could be mostly confirmed. Repeated-measures ANOVAs
revealed that a high rate of reinforcement mistakes, as compared to a low rate of such
mistakes, was associated with a more pronounced FRN amplitude in response to first-draw
negative feedback. In other words, compared to those with a high rate of reinforcement
errors, participants who updated probabilities when making their decisions exhibited a less
intense FRN in response to outcome feedback. Taking the opposite approach, the
regression analyses revealed that the likelihood of reinforcement errors was significantly
predicted by the amplitude of the FRN in response to first-draw negative feedback. These
results support the assumption that the amplitude of the FRN reflects processes of reward
learning or, more precisely, can be seen as an indicator of reinforcement learning (Holroyd
& Coles, 2002; Holroyd et al., 2002; Holroyd et al., 2003; Holroyd et al., 2005). This
process can be identified as automatic because it occurs in a time-window of less than
300 ms; thus, it occurs much earlier than controlled information processing (e.g., Devine,
1989; Chwilla et al., 2000). The present findings also fit with empirical evidence
suggesting that prediction errors signal the need to adjust behavior (Ridderinkhof,
Ullsperger, et al., 2004; Ridderinkhof, van den Wildenberg, et al., 2004). For example,
Cohen and Ranganath (2007) found that, across participants, FRN amplitudes elicited by
negative feedback in a simple “matching pennies” game were more pronounced when
participants chose the opposite versus the same target on the subsequent trial. Yasuda et al.
(2004) also found a significant association between FRN amplitude and rate of response
switching in the subsequent trial. Related to this, Cavanagh et al. (2010) showed that
single-trial theta oscillatory activity following feedback was correlated with subsequent
behavioral adaptation concerning response times. Using a reversal learning task,
Philiastides et al. (2010) demonstrated that single-trial changes in the EEG after positive
and negative feedback predicted behavioral choices from one trial to the next. And Van der
Helden et al. (2010) found that the FRN amplitude associated with a mistake on a
particular item was predictive of whether such a mistake was avoided the next time this
item was encountered.
The finding of interindividual heterogeneity in reinforcement learning processes
corroborates evidence that accounting for interindividual differences is crucial for
understanding the behavioral and neural correlates of reinforcement learning (e.g., Cohen,
2007; Cohen & Ranganath, 2005).
It seems that the amplitude of the FRN reflects a physiological predisposition for
reinforcement learning, given that incentives are not too low. Animal research
(Barraclough, Conroy, & Lee, 2004; Lee, McGreevy, & Barraclough, 2005) has found that
reinforcement learning parameters are highly stable over many testing sessions, suggesting
that these parameters reflect stable individual differences. One might conjecture that the
interindividual differences observed in the present study are also based on stable traits.
Participants who differed in FRN amplitudes to negative feedback and associated
reinforcement learning biases may have underlying genetic differences in dopamine levels.
Animal research suggests that the midbrain dopamine system is especially involved in
reinforcement learning (Berns et al., 2001; Daw & Doya, 2006; Montague et al., 2004;
Pagnoni et al., 2002; Schultz, 2002, 2004; Schultz, Dayan, & Montague, 1997). According
to the Reinforcement Learning Theory of the FRN (Holroyd & Coles, 2002; Holroyd et al.,
2002; Holroyd et al., 2003; Holroyd et al., 2005), the amplitude of the FRN is determined
by decreasing activity of midbrain dopaminergic neurons. Research has shown that there
are considerable individual differences in the amount of prefrontal and striatal dopamine,
which are based on a specific gene, namely the COMT (catechol-O-methyltransferase)
gene. COMT is an enzyme that breaks down dopamine and norepinephrine and is the
primary mechanism by which the dopamine catabolism is regulated in the prefrontal
cortex. Individuals homozygous for the low-activity met allele have lower COMT enzyme
activity and therefore higher prefrontal dopamine compared to individuals homozygous for
the high-activity val allele (e.g., Egan et al., 2001; Tunbridge, Bannerman, Sharp, &
Harrison, 2004). Due to an inverse relationship between tonic prefrontal dopaminergic
activity and phasic dopaminergic activity in the striatum (Bilder, Volavka, Lachman, &
Grace, 2004), val carriers are assumed to have higher striatal levels of dopamine (e.g., Akil
et al., 2003; Meyer-Lindenberg et al., 2005) which are crucial for reinforcement learning
(e.g., O’Doherty et al., 2004; Schönberg, Daw, Joel, & O’Doherty, 2007). Krugel, Biele,
Mohr, Li, and Heekeren (2009) recently demonstrated that individuals homozygous for the
val allele performed better in a reward-based learning task that required adaptation of
decisions to dynamically changing reward contingencies. Marco-Pallarés et al. (2009) showed that the FRN of participants homozygous for the val allele was more
pronounced than the FRN of met carriers in a simple gambling task.
The observed relationship between FRN amplitude and rate of reinforcement errors might be related to the results of several studies which suggest that both too-large and too-small FRN amplitudes reflect maladaptive feedback processing (e.g., schizophrenia: Morris, Heerey, Gold, & Holroyd, 2008; alcoholism: Fein & Chang, 2008; attention-deficit hyperactivity disorder: van Meel, Oosterlaan, Heslenfeld, & Sergeant, 2005; depression: Santesso et al., 2008). Onoda et al. (2010) found a significant negative correlation between impulsivity (nonplanning) and FRN amplitude. The authors argue that impulsive individuals show too little evaluation processing for negative feedback. It can be argued that, in the present study, participants with a high rate of reinforcement errors exhibit too
intense evaluation processing for negative feedback, which goes along with a strong FRN
amplitude.²²
But although the dummy representing the interaction between FRN amplitude and conflict was significant in the regression analyses over all participants, the repeated-measures ANOVAs on FRN amplitude in response to the first feedback revealed somewhat different results. The relationship between reinforcement behavior and FRN amplitude
could only be detected in participants paid with a high amount of money (18 cents) per
successful draw, and not in participants who were offered lower monetary incentives (9
cents). While the FRN has been found to be insensitive to the magnitude of rewards in
several studies using within-subjects designs (e.g., Hajcak et al., 2006; Holroyd et al.,
2006; Marco-Pallarés et al., 2008; Sato et al., 2005; Yeung & Sanfey, 2004), there have
been recent studies with different findings: Goyer et al. (2008), Marco-Pallarés et al.
(2009), Wu and Zhou (2009), Onoda et al. (2010), as well as Bellebaum et al. (2010) found
that the magnitude of a loss significantly affected FRN amplitudes. This is consistent with
fMRI research in humans indicating that the ACC is sensitive to the magnitude of reward
(e.g., Knutson et al., 2005; Rolls, McCabe, & Redoute, 2008). Goyer et al. (2008)
concluded that the variance in relative FRN amplitude was explained best by including, as
predictors, both the magnitude and the valence of both the chosen and unchosen outcome.
When incentives were very low in the present study, the amplitude of the FRN did
not vary between different predispositions for reinforcement learning. With low incentives,
there was generally only a weak FRN response to negative feedback. This might constitute
a floor effect in which interindividual differences are not visible anymore. Still, this
finding is surprising since the between-factor magnitude of incentives did not affect error
rates so that, based on the behavioral data alone, it might be concluded that the
manipulation of financial incentives did not matter. This is one of the reasons why it makes
sense to record participants’ ERPs during economic decision making and not only their
decision outcomes (i.e., error rates).
The moderator effect of incentive condition implies that the FRN is modulated by
the motivational significance of feedback. Such a conclusion has also been drawn by
Masaki et al. (2006) based on their own results and on the findings of Takasawa, Takino,
and Yamazaki (1990) and Holroyd et al. (2003). Yeung et al. (2005) found a positive
correlation between FRN amplitude and the subjective rating of involvement in the
gambling task. These findings correspond to research on the ERN (see also 1.4.4), an ERP which is closely related to the FRN and which has also been linked to the estimation of the motivational value of events (e.g., Bush et al., 2000; Hajcak & Foti, 2008; Hajcak, Moser, Yeung, & Simons, 2005; Pailing & Segalowitz, 2004).

²² This strong processing is not generally maladaptive, but it is maladaptive in the present task after left draws, where reinforcement learning is opposed to Bayesian updating.
So one might conclude that the magnitude of monetary incentives was discerned by
the brain’s feedback monitoring system but, nevertheless, it did not affect performance in
terms of error rates, probably due to the particular characteristics of the decision task (see
above).
The question arises whether the paradigm used in the present study is rather artificial,
compared to natural economic environments. This question is also raised by Charness and
Levin (2005). The present paradigm is conceptualized in a way that after draws from the
left urn, the rather automatic, intuitive reinforcement strategy and the more controlled,
rational strategy of Bayesian updating are always opposed. Therefore, intuitive decision
making is always detrimental. Most likely, a situation such as this is relatively rare in real-world economic settings. As previously stated (see 1.1.1), reliance on automatic or
intuitive processes often leads to very good results or even outperforms deliberate
reasoning (e.g., Frank et al., 2006; Glöckner, 2008; Glöckner & Betsch, 2008a, 2008b,
2008c; Glöckner & Dickert, 2008; Hammond et al., 1987; Klein, 1993). The quality of
intuitive decisions strongly depends on circumstances and task characteristics (for a review
of findings, see Plessner & Czenna, 2008). Glöckner and Witteman (2010b) predict that
intuition will lead to good choices if the information on which decisions are based is
sufficient, representative, and not biased (e.g., by task properties). Hogarth (2001) also
emphasizes the importance of unbiased feedback for the quality of intuitive decisions. It
might be argued that the present task constitutes a somewhat biased setting (e.g., the
distribution of balls in the two urns leads to the fact that the reinforcement heuristic and
Bayesian updating are directly opposed after left draws). Most natural situations probably
offer comparatively unbiased information so that reliance on a reinforcement strategy is
also effective. Nevertheless, results of the present study indicate that both strategies,
Bayesian updating and the reinforcement heuristic, are relevant and that, when they conflict, people often deviate from utility-maximizing behavior.
In summary, findings of Study 1a support the relevance of the dual-process approach to
economic decision making. On the one hand, there is a rational strategy based on an
efficient (Bayesian) use of information in order to maximize utility. On the other hand,
basic reinforcement processes also play a role in actual human decision making. According
to a dual-process perspective, economic rationality is related to a set of controlled
processes and boundedly rational behavior is linked to a set of rather automatic or intuitive
processes.
Beyond this, the present findings shed some light on the issue of interindividual
heterogeneity in economic decision making. Results strongly suggest that people interpret
and use reinforcements in different ways. While good statistical skills increase the
probability that a participant relies on the controlled processes of Bayesian updating, trust
in one’s own intuitive skills increases the probability of deciding according to (non-optimal) heuristics. There seems to be a physiological predisposition to decide in line with
reinforcement processes which is reflected in the amplitude of the FRN. This does not
occur when monetary incentives for a successful decision outcome are too low. In other
words, the FRN seems to require some kind of incentive in order to indicate the
predisposition for reinforcement learning in the present study.
2.2 Study 1b: Bayesian Updating and Reinforcement Learning – Fostering Rational Decision Making by Removing Affective Attachment of Feedback
2.2.1 Overview
The aim of the second study was to investigate whether people are able to apply
controlled strategies (e.g., Bayesian updating) when simple heuristics (e.g., the
reinforcement heuristic) are not available. Another aim of the study was to test whether the
differentiated FRN responses which were observed in Study 1a were specific reactions to
the valence of feedback and would therefore be absent in response to feedback which is
neutral concerning valence (i.e., which is only informative in the sense of being a possible
indicator of the distribution of balls in the urns, but which does not indicate success or
failure). For these purposes, the paradigm of Study 1a was altered in a similar way as in the
study by Charness and Levin (2005):
In order to avoid any monetary and emotional reinforcement from the first draw²³, a
different payment method was implemented. All first draws were forced left and unpaid;
the winning color was varied from trial to trial and was revealed only just before the
second draw. Correspondingly, feedback for the first draw did not indicate success or
failure so that the reinforcement heuristic would not apply in the same manner as in Study
1a. However, there might be a type of retroactive reinforcement due to the announcement
of the winning color for the current trial. This might induce participants to attach valence
to the feedback ex post facto (by retroactively defining the drawn ball as winning or
losing). Again, monetary incentives (doubled compared to Study 1a) were manipulated in
order to test for their effect on decision performance. EEG was recorded while participants
made their decisions and were presented with feedback about the outcomes of their draws.
Dependent variables were the same as in Study 1a.
One aim of the present study was to replicate the main results of Charness and
Levin (2005; see also Charness et al., 2007), that is, a significant reduction of switching
errors resulting from the removal of emotional attachment from the first draw. Such a
finding would strongly support the conclusion that switching errors after left draws in Study 1a (i.e., reinforcement mistakes) resulted from a conflict between Bayesian updating and the reinforcement heuristic.

²³ Charness and Levin (2005) refer to this emotional attachment as affect. However, affect actually refers to a different concept. The term affect describes processes which involve negative and positive feelings such as emotional experience or mood (see Parrott, 1996). A more adequate term in the present context would be valence, which refers to an event’s reinforcement character that results in a short-lived emotional attachment (Lewin, 1939). A positive valence is often associated with a stronger approach motivation than an avoidance motivation; for negative valence, the reverse is shown to be true (see, e.g., Bargh, 1997, or Zajonc, 1998).

Moreover, the effect of person characteristics (e.g., skills
in statistics and faith in intuition) on decision behavior was investigated as in Study 1a.
Again, high skills in statistics were expected to negatively affect the likelihood of an error.
No specific hypothesis was formulated with regard to faith in intuition. This thinking style
might no longer affect rates of reinforcement errors in the altered paradigm in which the
affective attachment of feedback is removed. It is also conceivable that after announcement
of the winning color, participants attach some valence to the feedback retroactively so that
a “win-stay, lose-shift” strategy is still applicable. Concerning the manipulation of
monetary incentives, it was predicted that participants paid with a higher amount of money
for successful decisions would apply controlled strategies to a greater extent than
participants paid with a lower amount of money. Since in the altered paradigm participants
are not led astray by the reinforcement heuristic and thus face no decision conflict (or at
least less conflict than in Study 1a), it is possible that the magnitude of incentives
affects performance to a higher degree than in the previous study. Moreover,
incentives were doubled compared to Study 1a, resulting in a larger difference in payment
between the two incentive conditions (9 cents in Study 1a vs. 18 cents in Study 1b).
Beyond that, the present study again investigated the processing of feedback on decision
outcomes as reflected in the FRN. It was expected that the FRN response to feedback on
the first draw (which was observed in Study 1a) would be absent in the present study as
this feedback is neutral concerning valence (i.e., it does not indicate success or failure, but
is only informative in the sense of being a possible indicator of the distribution of balls in
the urns). In contrast, the typical FRN response to negative feedback was expected for the
second draw. No particular hypothesis was proposed as to whether this FRN amplitude
would be related to individual differences in switching behavior (as in Study 1a). If
participants experience no reinforcement from the first draw at all, FRN amplitude (as an
indicator of the individual predisposition for reinforcement learning) should not be related
to decision errors in the present study. But if there is a kind of retroactive reinforcement
due to the subsequent announcement of the winning color (see above), FRN amplitude
should be more pronounced (i.e., more negative) for participants with a high rate of
reinforcement mistakes (as in Study 1a).
Thus, the hypotheses of Study 1b were the following:
H1: Removing reinforcement characteristics from the first draw (by not paying for
its outcome and not associating it with success or failure) reduces the rate of switching
errors: The rates of win-stay mistakes and lose-shift mistakes are lower compared to Study
1a.
H2: Monetary incentives manipulated between participants influence decision
making: Error rates are lower for participants who are paid with 36 cents per successful
draw than for participants who receive 18 cents for each successful draw.
H3: Skills in statistics influence decision making: Error rates are lower for
participants with higher statistical skills. Faith in intuition might affect error rates if some
affective valence is retroactively attached to the first feedback.
H4: There is no FRN response following the outcome of the first draw because
feedback on the first draw is affectively neutral and does not indicate any valence.
H5: There is a typical FRN response following negative feedback on the second
draw. The amplitude of this FRN might be related to decision making if some affective
valence is retroactively attached to the first feedback. In this case, there is a positive
relationship between the rate of reinforcement mistakes and the magnitude (negativity) of
the FRN amplitude following a negative outcome for the second draw.
2.2.2 Method
Participants
51 right-handed participants (20 male, 31 female) with normal or corrected-to-normal vision, ranging in age from 19 to 28 years (M = 22.3, SD = 2.23), were recruited
from the student community at the University of Konstanz. As in Study 1a, students who
majored in economics were not allowed to take part in the study. In exchange for
participation, participants received 8 € plus a monetary bonus that depended upon the
outcomes of the computer task, as described below. All participants signed an informed
consent document before the start of the experiment. The individual sessions took
approximately 90 minutes.
Five of the participants were excluded from data analysis. Two did not properly
understand the rules of the decision task24, the other three reported severe problems in
remembering the winning color of the current trial for the second draw.
Design
The study followed a 2 between (magnitude of incentives: high – 36 cents per
successful draw vs. low – 18 cents per successful draw) x 2 within (counterbalance:
payment for a blue ball vs. payment for a green ball) mixed-model design. Dependent
variables were the same as in Study 1a.
Material and Procedure
The stimulus material and the procedure in the laboratory were the same as in Study
1a. The computer task was in some aspects different to the one used in Study 1a (see
instructions and experimental protocol in appendix D):
For 60 trials the participant made two draws with replacement. But this time,
participants were only paid for the outcome of the second draw, while the first draw –
which always had to be from the left urn25 – only possessed informative value. Depending
on counterbalancing, the participant was paid for drawing blue balls or green balls. In
contrast to Study 1a, this factor was realized within participants and was determined
randomly in each trial. Participants were not told whether blue or green would pay on the
second draw until after feedback on the first draw was given (blue or green ball). This
means that when receiving feedback on the first draw, participants did not yet know the
winning color of the current trial. In this way, the reinforcement character (valence) of the
first draw was to be removed. Monetary incentives were manipulated between participants:
One group of participants earned 36 cents for every successful second draw, compared to
18 cents in the other group.
For all participants, the distribution of balls in the two urns was as follows: In the
up state, the left urn was filled with 4 blue balls and 2 green balls, and the right urn was
filled with 6 blue balls. In the down state, the left urn had 2 blue balls and 4 green balls,
while the right urn had 6 green balls. Figure 14 provides an overview of the distribution of
balls in the two urns:
24 Their statements in the final questionnaires suggested that one of them did not understand that decisions were possible at all (this person always pressed the same key in all trials), and that the other one did not understand the payment method.
25 Therefore, it was not possible to make first-draw mistakes or understanding mistakes, but only reinforcement mistakes.
State of the World   Probability   Left Urn           Right Urn
Up                   50 %          4 blue, 2 green    6 blue
Down                 50 %          2 blue, 4 green    6 green

Figure 14. Distribution of balls in the two urns.
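The Bayesian benchmark implied by these urn compositions can be made concrete with a short computation. The following sketch is illustrative and not part of the original study materials; the function names are invented for the example:

```python
# Illustrative sketch of the Bayesian benchmark implied by the urn
# compositions above (not part of the original study materials).
# Priors are 50/50, the first draw is always from the left urn, and the
# winning color is announced only after that draw.
from fractions import Fraction

# probability of drawing a blue ball from the left urn in each state
P_BLUE_LEFT = {"up": Fraction(4, 6), "down": Fraction(2, 6)}

def posterior_up(first_ball):
    """Posterior probability of the 'up' state after the first draw."""
    prior = Fraction(1, 2)
    if first_ball == "blue":
        like_up, like_down = P_BLUE_LEFT["up"], P_BLUE_LEFT["down"]
    else:  # green
        like_up, like_down = 1 - P_BLUE_LEFT["up"], 1 - P_BLUE_LEFT["down"]
    return (prior * like_up) / (prior * like_up + prior * like_down)

def win_probability(urn, winning_color, p_up):
    """Probability that the second draw yields the winning color."""
    if urn == "right":
        # the right urn is pure: all blue in 'up', all green in 'down'
        return p_up if winning_color == "blue" else 1 - p_up
    p_blue = p_up * P_BLUE_LEFT["up"] + (1 - p_up) * P_BLUE_LEFT["down"]
    return p_blue if winning_color == "blue" else 1 - p_blue

p_up = posterior_up("blue")                       # 2/3 after a blue draw
p_right = win_probability("right", "blue", p_up)  # 2/3 if blue wins
p_left = win_probability("left", "blue", p_up)    # 5/9 if blue wins
```

After a blue first draw, the posterior probability of the up state is 2/3, so the right urn is the better second choice if blue is announced as the winning color (2/3 vs. 5/9), whereas the left urn is better if green wins (4/9 vs. 1/3).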
The sequence of events was as follows: First, the possible distributions of balls in the urns
(see Figure 4) were presented. After the participant pressed the space bar, there was an
inter-stimulus-interval of 500 ms. Subsequently, the two urns (see Figure 2) were presented
until the left urn was chosen by pressing the corresponding key. After 1000 ms, feedback
was presented under the left urn for another 1000 ms. Then, the winning color of the
current trial (blue or green) was announced (“Blau gewinnt” [“Blue wins”] or “Grün
gewinnt” [“Green wins”]) which was determined randomly by the computer. After 1000
ms, an instruction to go on to the second draw by pressing the space bar appeared until the
participant did so. Then again the two urns plus the feedback on the first draw were shown
until the participant chose an urn for the second draw. Again, after 1000 ms, feedback was
presented under the chosen urn for 1000 ms. Finally, an instruction to proceed to the next
trial appeared. Altogether, the sequence was the same as in Study 1a except for the
announcement of the winning color after feedback on the first draw26.
The questionnaire participants filled out after the computer experiment (see appendix D)
consisted of the same questions as in Study 1a, with the addition of the Big Five
Inventory-SOEP (BFI-S; Gerlitz & Schupp, 2005), some questions concerning the EEG
measurement and the experimental procedure, and some further questions related to the
financial incentives provided.
26 The announcement of the winning color was somewhat problematic. Since this information disappeared when participants went on to the second draw (which was necessary with regard to ERP measurement), participants had to keep the winning color of the current trial in mind and were not able to review it. Three participants reported severe difficulties remembering the current winning color and were therefore excluded from data analysis (see above).
EEG Acquisition and Analysis
Data were acquired and analyzed as described above for Study 1a. Epochs were
locked to first-draw pooled feedback27 and to second-draw positive and negative feedback.
This produced three average waveforms per participant.
Concerning feedback on the first draw, two participants were eliminated from
further analysis due to an excessive number of trials containing artifacts28. An average of 21 % of the
trials for the remaining participants were excluded. Regarding the analysis of the second
feedback, one participant was eliminated due to excessive artifacts29 and an average of
22 % of the trials for the remaining participants were excluded.
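The quantification parameters themselves are described above for Study 1a; purely as an illustration, a peak-based FRN measure over the 200-300 ms post-feedback window used later in the Results might be sketched as follows. The sampling rate and epoch limits here are assumptions made for the example, not the study's actual acquisition settings:

```python
# Illustrative sketch only: a peak-based FRN measure in the 200-300 ms
# post-feedback window. Sampling rate and epoch limits are assumptions
# made for this example, not the study's actual acquisition settings.
import numpy as np

SFREQ = 250          # assumed sampling rate (Hz)
EPOCH_START = -0.25  # assumed epoch start relative to feedback onset (s)

def average_epochs(epochs, artifact_mask):
    """Average artifact-free epochs (trials x samples) into one waveform."""
    return epochs[~artifact_mask].mean(axis=0)

def frn_peak(avg_waveform, window=(0.200, 0.300)):
    """Most negative amplitude in the FRN window of an averaged waveform."""
    times = EPOCH_START + np.arange(len(avg_waveform)) / SFREQ
    mask = (times >= window[0]) & (times <= window[1])
    return float(avg_waveform[mask].min())
```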
2.2.3 Results
First Overview
To begin the analyses, participants' statements were examined in order to
find out how they had worked on and perceived the task, and how they assessed their own
skills in statistics. Most ratings were given on an analog scale
(length: 10 cm; i.e., scores ranged continuously between 0 and 10).
As assessed by the experimenter, participants had a quite good understanding of the
rules of the experiment (on a 5 cm analog scale: M = 4.03, SD = 0.83) provided by means
of detailed written instructions. Self-reports indicated that more than adequate effort was
made in accomplishing the experiment properly (M = 8.45, SD = 1.05). Participants stated
that they found it quite important to perform well (M = 7.14, SD = 1.93). The possibility
of winning a considerable amount of money was self-assessed as a relevant factor for the
motivation to participate (M = 6.62, SD = 2.59) and as a very motivating factor for performing well (M =
8.10, SD = 1.47). On average, participants considered the task not very difficult (M = 2.81,
SD = 2.04). The self-assessed level of general statistical skills (M = 4.56, SD = 2.68) and of
skills in calculating probabilities (M = 4.61, SD = 2.70) were reported as moderate.
Concerning the sources of such skills, most participants named courses in the context of
their studies or lessons at school, others referred to games, voluntary readings or former
experiments at university. 27 of 43 participants (62.8 %) who gave full particulars majored
27 Since the winning color of the current trial was only announced after feedback on the first draw, the first feedback actually did not possess any valence. Therefore the pooled feedback was examined.
28 Data of these participants were included in the analysis of behavioral observations (e.g., error rates) and self-reports, though.
29 See footnote 28.
in a subject with obligatory statistics courses, and 11 of 45 (24.4 %) participants declared
they were already experienced with the type of decision scenario implemented in the
experiment.
According to self-reports, participants did not find decision making to be very
stressful while EEG data were simultaneously recorded (M = 3.24, SD = 2.35). Wearing
the cap and feeling the gel on the scalp were not considered unpleasant (M = 1.47, SD =
1.75). Participants found it quite difficult to avoid body movements and eye blinks (M =
6.81, SD = 2.14). Altogether, participation in the study was rated as being unstressful (M =
2.29, SD = 1.74).
On average, participants won 8.50 €, with a range from 4.32 € to 13.68 €. The
duration of the computer task averaged about 14 minutes, with considerable variance
between participants (see Table 7):
Table 7
Amount won and duration of the computer task depending on magnitude of incentives

                 Amount Won in €               Duration in Minutes
Incentives   M (SD)         min    max     M (SD)         Median (IQR)   min     max
High         11.38 (1.27)   9.00   13.68   14.10 (2.42)   13.41 (2.65)   10.37   20.15
Low           5.63 (0.63)   4.32    6.84   13.66 (2.83)   13.26 (3.40)    9.81   23.33
Total         8.50 (3.07)   4.32   13.68   13.88 (2.61)   13.41 (2.92)    9.81   23.33
Equivalence of Groups
Study 1a. Compared to Study 1a (20 of 45 = 44.4 %), marginally
significantly more participants in Study 1b majored in a subject with obligatory statistics
courses (27 of 43 = 62.8 %), χ²(1, N = 88) = 2.97, p = .09. Further, participants in Study 1b
(M = 6.39, SD = 1.37) scored significantly lower in need for cognition than participants of
Study 1a (M = 7.28, SD = 1.21), t(88) = 3.26, p < .01. Apart from that, there were no other
noteworthy differences between the two studies (p-values > .218).
Incentives. Before the main analyses, differences between the high-incentive
(n = 23) and the low-incentive condition (n = 23) were tested concerning several factors
that might have influenced the behavioral results. In line with expectations, the
amount of money participants estimated they would win prior to the start of the computer
task was significantly higher in the high-incentive condition (M = 8.60, SD = 3.99) than in
the low-incentive condition (M = 4.85, SD = 2.17), t(43) = 3.90, p < .001. There were no
other noteworthy differences between these two conditions (all p-values > .111).
Counterbalancing. As participants were paid for drawing blue balls in about 50 %
of the trials and for drawing green balls in the other 50 %, it was checked whether blue and
green stimuli were evaluated differently (see Table 8). In line with expectations, valence
ratings as well as arousal ratings for blue and green balls did not differ (p-values > .313).
Table 8
Evaluation of stimuli

             Valence        Arousal
             M (SD)         M (SD)
Blue Ball    2.84 (1.09)    2.76 (1.00)
Green Ball   3.13 (1.10)    2.91 (1.15)
Error Rates
The following table provides an overview of error rates for the two possible types
of errors:
Table 9
Error frequencies depending on magnitude of incentives

Incentives   Win-Stay Mistakes     Lose-Shift Mistakes
High         16.2 % (647) [4]      22.9 % (733) [4]
Low          25.7 % (666) [2]      32.8 % (714) [2]
Total        21.0 % (1313)         27.8 % (1447)

Note. Total number of observations in parentheses and error costs (in cents) in square brackets.
Testing the hypothesis that removing valence from the first draw would reduce the rates of
reinforcement mistakes (H1), error rates of the present study were compared to those of the
previous study. Regarding win-stay mistakes, the error rate decreased from 64.9 % in Study
1a to 21.0 % in Study 1b, a difference of 43.9 percentage points. For lose-shift mistakes, it
decreased from 51.9 % to 27.8 %, a difference of 24.1 percentage points. Fisher's exact tests
revealed that in the present condition, significantly fewer win-stay mistakes (p < .001) and
lose-shift mistakes (p < .001) were committed than in Study 1a. Mann-Whitney U tests applied to individual error rates
confirmed a significant difference for win-stay mistakes (z = 4.65, p < .001) and for
lose-shift mistakes (z = 3.30, p < .01). This means that the reduction was somewhat more
pronounced for errors after a successful outcome than for errors after an unsuccessful
outcome of the first draw.
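For readers less familiar with these two tests, the following sketch shows how such a comparison might be run in SciPy. All counts and individual rates are illustrative stand-ins, not the study's raw data (the second row of the contingency table merely echoes the reported 21.0 % of 1313 observations):

```python
# Illustrative sketch of the two kinds of tests used above; all numbers
# are stand-ins, not the study's raw data (the "Study 1b" row merely
# echoes the reported 21.0 % of 1313 observations).
from scipy.stats import fisher_exact, mannwhitneyu

# Fisher's exact test on aggregated counts: errors vs. correct choices
table = [[650, 350],    # "Study 1a": errors, correct (hypothetical)
         [276, 1037]]   # "Study 1b": errors, correct (approx. 21.0 % of 1313)
odds_ratio, p_fisher = fisher_exact(table)

# Mann-Whitney U test on individual error rates (hypothetical values)
rates_1a = [0.70, 0.55, 0.80, 0.60, 0.65]
rates_1b = [0.10, 0.25, 0.05, 0.30, 0.20]
u_stat, p_mwu = mannwhitneyu(rates_1a, rates_1b, alternative="two-sided")
```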
Next, it was tested whether and how financial incentives affected error rates. According to
H2, participants paid with 36 cents per successful draw were expected to have a lower
error rate than participants paid with 18 cents. Fisher‟s exact tests were applied to compare
the two conditions aggregatedly over all participants. Testing revealed a significant
difference in the predicted direction for both types of mistakes, p-values < .001. In
situations when participants received 36 cents as opposed to 18 cents per successful draw,
a lower number of win-stay mistakes and lose-shift mistakes were committed. This was
confirmed in the analysis of individual error rates using Mann-Whitney U tests. There was
a significant difference for win-stay mistakes (z = 1.78, p < .05, one-sided) and a
marginally significant difference for lose-shift mistakes (z = 1.48, p = .07, one-sided)
between the two incentive conditions.
Contrary to the findings reported by Charness and Levin (2005), participants' behavior
changed over the course of the sessions, at least for lose-shift mistakes (see Table 10 and
Figure 15). While there was no clear trend for win-stay mistakes, it seems that the rate of
lose-shift mistakes dropped over the course of the experiment. This might indicate a type
of learning effect.
Table 10
Error frequencies depending on trial number

                       Trial Number
                       1-15      16-30     31-45     46-60
Win-Stay Mistakes      22.9 %    21.8 %    21.0 %    18.5 %
Lose-Shift Mistakes    35.1 %    26.9 %    26.9 %    22.2 %
Figure 15. Error frequencies depending on trial number (error frequency in %, plotted separately for win-stay and lose-shift mistakes across trial blocks 1-15, 16-30, 31-45, and 46-60).
Again, as in the former study, there was considerable behavioral heterogeneity across
individuals, demonstrated by the variance of individual error rates (see Figure 16). Error
rates ranged from zero to over ninety percent, but due to the enormous variance (M =
24.57, SD = 25.16) there were no outliers or extreme values.
Figure 16. Individual heterogeneity regarding error rates (number of participants per individual error-rate bin, from 0-10 % up to 75-100 %, shown separately for the high- and low-incentive conditions).
FRN in Response to First-Draw Feedback
The grand-average ERP waveform (n = 44) from channel FCz following the
presentation of the first drawn ball is shown in Figure 17. The color of this first ball was
not of an evaluative nature because at this point in time the winning color of the trial had
not yet been announced. Therefore, the present figure does not distinguish between blue
and green balls. Visual inspection of the waveform showed that, consistent with
expectations (H4), there was no typical FRN response in the time window between 200
and 300 ms after presentation of the first drawn ball30.
Figure 17. Grand-average ERP in response to first-draw feedback (amplitude in µV at FCz, plotted from -250 to 750 ms relative to feedback onset).
FRN in Response to Second-Draw Feedback
For a trial's second draw, participants knew what the winning color was; thus
it made sense to analyze the FRN in response to the feedback on this draw (i.e., feedback
whether or not they had won). The grand-average ERP waveforms (n = 45) from channel
FCz as a function of the reward valence of the outcome of the second draw are shown in
Figure 18A. In line with expectations (H5), a substantial FRN with a frontocentral scalp
topography was evident which tended to be more pronounced for negative feedback
compared to positive feedback. The corresponding topographies of the FRN are shown in
Figure 18B.
30 Note that in addition, 1000 ms later the information which color would be paid for in the next (i.e., the second) draw was presented. ERP responses to this announcement were not analyzed (see 2.2.4).
Figure 18. (A) Grand-average FRN as a function of valence of second-draw feedback. (B)
Topographical mapping of the FRN as a function of valence of second-draw feedback.
Using a median split procedure (separately for the two incentive conditions), participants
were classified as having either a low or high rate of reinforcement mistakes31. A 2 within
(valence of feedback: negative vs. positive) x 2 between (magnitude of incentives: low vs.
high) x 2 between (rate of reinforcement mistakes: low vs. high) ANOVA on the amplitude
of the FRN in response to second-draw feedback revealed a significant main effect of
valence of feedback, F(1, 41) = 32.43, p < .001, η² = .44. No significant main effect for
magnitude of incentives was observed, F(1, 41) = 2.66, p = .11, η² = .06, and no main
effect for rate of reinforcement mistakes, F < 1. The valence of feedback x rate of
reinforcement mistakes interaction was significant, F(1, 41) = 7.15, p < .05, η² = .15,
whereas all other interactions did not reach significance (magnitude of incentives x rate of
reinforcement mistakes: F(1, 41) = 2.20, p = .14, η² = .05; valence of feedback x
magnitude of incentives: F < 1; valence of feedback x magnitude of incentives x rate of
reinforcement mistakes: F < 1).
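The within-condition median split described above can be sketched as follows; the data frame and error rates are hypothetical and merely demonstrate the classification step:

```python
# Illustrative sketch (hypothetical error rates): classifying participants
# as having a low or high rate of reinforcement mistakes via a median split
# computed separately within each incentive condition.
import pandas as pd

df = pd.DataFrame({
    "participant": range(8),
    "incentives": ["low"] * 4 + ["high"] * 4,
    "error_rate": [0.40, 0.20, 0.30, 0.10, 0.08, 0.02, 0.06, 0.01],
})

# each participant's group median; values at or below it count as "low"
group_median = df.groupby("incentives")["error_rate"].transform("median")
df["error_group"] = (df["error_rate"] > group_median).map(
    {True: "high", False: "low"})
```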
T-tests for unpaired samples were computed to further analyze the significant two-way interaction. Results showed that only for second-draw negative feedback, participants
with a low rate of reinforcement mistakes (M = 4.27, SD = 3.30) had a significantly smaller
FRN amplitude than participants with a high rate of reinforcement mistakes (M = 6.24, SD
= 4.28), t(43) = 1.74, p < .05 (one-sided). For positive feedback, the difference between
31 In the low-incentive group, the median percentage of reinforcement errors was 27.5 %, as opposed to 5.0 % in the group paid with high incentives.
participants with a low rate of reinforcement mistakes (M = 2.90, SD = 3.62) and
participants with a high rate of such mistakes (M = 2.47, SD = 2.70) did not reach
significance, t < 1.
In Figures 19 to 22, all grand-average waveforms are plotted with the
corresponding topographies of the FRN at FCz, depending on magnitude of incentives and
rate of reinforcement mistakes. Table 11 presents FRN amplitude values at FCz.
Figure 19. (A) Grand-average FRN as a function of magnitude of incentives (low: n = 22; high: n = 23) and valence of second-draw feedback. (B) Topographical mapping of the FRN as a function of magnitude of incentives and valence of second-draw feedback.
Figure 20. (A) Grand-average FRN as a function of rate of reinforcement mistakes (low: n = 23; high: n = 22) and valence of second-draw feedback. (B) Topographical mapping of the FRN as a function of rate of reinforcement mistakes and valence of second-draw feedback.
Figure 21. (A) Grand-average FRN in the low-incentive condition as a function of rate of reinforcement mistakes (low: n = 11; high: n = 11) and valence of second-draw feedback. (B) Topographical mapping of the FRN in the low-incentive condition as a function of rate of reinforcement mistakes and valence of second-draw feedback.
Figure 22. (A) Grand-average FRN in the high-incentive condition as a function of rate of reinforcement mistakes (low: n = 12; high: n = 11) and valence of second-draw feedback. (B) Topographical mapping of the FRN in the high-incentive condition as a function of rate of reinforcement mistakes and valence of second-draw feedback.
Table 11
Mean peak amplitude (grand-average) of the FRN as a function of magnitude of incentives, rate of
reinforcement mistakes, and valence of second-draw feedback

                                           Valence of Feedback
Magnitude of   Rate of Reinforcement   Negative        Positive
Incentives     Mistakes                M (SD)          M (SD)
Low            Low                     2.68 (2.97)     1.50 (1.42)
               High                    6.47 (5.21)     2.11 (2.83)
               Total                   4.58 (4.57)     1.81 (2.21)
High           Low                     5.73 (2.98)     4.19 (4.55)
               High                    6.01 (3.33)     2.83 (2.63)
               Total                   5.87 (3.08)     3.54 (3.74)
Total          Low                     4.27 (3.30)     2.90 (3.62)
               High                    6.24 (4.28)     2.47 (2.70)
               Total                   5.24 (3.89)     2.69 (3.18)
Regression Analyses
As in Study 1a, the data form a strongly balanced panel, with 60 second-draw
observations per participant. Data were again analyzed by means of random-effects probit
regressions on second-draw decisions (error vs. correct decision), due to non-independence
of observations within participants.
The results of the regression analyses are presented in Table 12. As postulated in
H2 and consistent with the results of the Fisher's tests and Mann-Whitney U tests reported
above, errors were significantly less likely in the high-incentive condition. Because
incentive magnitude and error cost move together, this simultaneously means that the
opportunity costs of errors significantly affected error rates32. The counterbalance dummy
was again not significant, which means that the winning color did not affect decision
behavior in a particular way, whether manipulated between or within participants. As
already observed in Study 1a, time (trial number divided by 60) also had a highly
significant negative effect on error rate.
Concerning personal characteristics of participants, no gender effect was found. In
the final regression model, self-assessed knowledge in statistics significantly affected error
32 Opportunity cost of errors was not included in the regression as a separate variable due to collinearity with the incentive dummy.
likelihood, which was in line with expectations (H3). Faith in intuition was a highly
significant positive predictor of error rate. Additionally, a high level of extraversion and
neuroticism made errors more likely to occur. Finally, a large FRN amplitude in response
to second-draw negative feedback made errors more likely (H5).
Table 12
Random-effects probit regressions on second-draw errors (0 = correct choice, 1 = error)

Variable                    Probit 1      Probit 2      Probit 3      Probit 4
Incentives (1 = High)       -0.543*       -0.617**      -0.585**      -0.655***
                            (0.290)       (0.261)       (0.245)       (0.243)
Counterbalance               0.028         0.028         0.028         0.028
                            (0.062)       (0.062)       (0.062)       (0.062)
Time                        -0.450***     -0.451***     -0.452***     -0.452***
                            (0.107)       (0.107)       (0.107)       (0.107)
Personal Characteristics
Gender (1 = Male)            0.168         0.149         0.259         0.342
                            (0.297)       (0.268)       (0.256)       (0.254)
Knowledge In Statistics                   -0.066        -0.089*       -0.095**
                                          (0.051)       (0.049)       (0.048)
Faith In Intuition33                       0.288***      0.276***      0.278***
                                          (0.094)       (0.088)       (0.086)
Extraversion34                                           0.118**       0.107*
                                                        (0.059)       (0.058)
Neuroticism35                                            0.114**       0.122**
                                                        (0.058)       (0.057)
Electrocortical Responses
FRN (Neg FB2)36                                                        0.541*
                                                                      (0.322)
Constant                    -0.509**      -1.791***     -3.043***     -3.294***
                            (0.245)       (0.619)       (0.778)       (0.775)
log likelihood              -1127.954     -1122.609     -1119.815     -1118.427
Wald test                    21.57***      33.85***      41.46***      45.17***

Notes. Number of observations = 2700 (45 x 60). Numbers in parentheses are standard errors.
*p < .10. **p < .05. ***p < .001
33 Internal consistency as measured by Cronbach's alpha was good (alpha = .84).
34 Internal consistency as measured by Cronbach's alpha was good (alpha = .86).
35 Internal consistency as measured by Cronbach's alpha was satisfactory (alpha = .73).
36 To make this measure more comparable to the other variables, FRN amplitude was multiplied by a factor of 10.
2.2.4 Discussion
By altering the Bayesian updating task so that the reinforcement heuristic did not
apply in the same way as in Study 1a (see Charness & Levin, 2005; see also Charness et
al., 2007), it was investigated whether controlled processes (i.e., applying Bayes' rule) were
more feasible than in Study 1a, where the simple automatic strategy was directly
available and conflicted with Bayesian updating. More precisely, in Study 1b new
information was provided by the first draw as in Study 1a; however, this time the
reinforcement experience (valence) associated with this draw was eliminated as
participants were neither paid for their decision outcome nor did they have the possibility
to immediately associate this outcome with success or failure. That is, by presenting the
outcome of the first draw, participants only received new information with which they
could infer the distribution of balls in the two urns. Valence could only possibly be
attached to the drawn balls by the participants ex post facto after the winning color of the
current trial had been announced (by retroactively defining the drawn ball as a winning or
losing ball). Study 1b focused again on error rates and the feedback-related negativity
(FRN), as well as on other relevant personality variables (e.g., faith in intuition).
The following issues were addressed in the present study: (1) Is the error rate after a
first draw from the left urn reduced when the emotional attachment of feedback is
removed? (2) Do higher monetary incentives, manipulated between participants, reduce
error rates? (3) Are skills in statistics and faith in intuition related to error rates? (4) Can an
FRN be observed if the outcome of the first draw only covers information about the
possible distribution of balls in the urns and not affective information such as success and
failure? (5) Is there an FRN response after a second-draw negative outcome and if so, is its
amplitude positively associated with the rate of reinforcement mistakes?
Overall, both the error rates and the ERP data of Study 1b confirmed the
assumption that it is the emotional attachment of feedback which drives reinforcement
learning and leads to a high rate of switching errors as observed in Study 1a. Error rates
were much lower in the present study in which there was no direct reinforcement based on
the first draw‟s outcome because this outcome did not indicate success or failure. ERP
results showed that for this neutral feedback, no FRN response was evident. In contrast, a
typical FRN was visible in response to second-draw negative feedback. The amplitude of
this FRN was positively correlated with participants' rate of reinforcement errors. This
suggests that reinforcement experiences were not completely absent from Study 1b, but
that there might have been a kind of retroactive reinforcement due to the subsequent
announcement of the winning color. As in Study 1a, faith in intuition was correlated with
participants' decision behavior, also suggesting that at least some affective attachment was
still present in the altered paradigm of Study 1b. In the present study, monetary incentives
manipulated between participants significantly affected error rates. That is, participants
paid with doubled incentives committed significantly fewer errors.
On the whole, the results of this follow-up study support the conclusions of Study
1a. If feedback that can be used for later economic decisions is attached to emotional
quality (valence), an automatic reinforcement process which ignores prior probabilities
conflicts with a more controlled, deliberative process of Bayesian updating which can be
considered rational in an economic sense. But when this emotional attachment is removed,
the reinforcement heuristic is no longer directly available and cannot disturb the processes
of Bayesian updating to such a high degree. In this case, people tend to apply more
controlled strategies.
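The contrast between these two processes can be illustrated with a minimal sketch. All parameters below are hypothetical (the exact urn compositions and payoff probabilities of the paradigm are not reproduced here): the reinforcement heuristic reacts only to the previous outcome, whereas Bayesian updating revises the probability of the state of the world via Bayes' rule.

```python
# Minimal sketch of the two decision processes, with hypothetical parameters.

def reinforcement_choice(prev_urn, prev_success):
    """'Win-stay, lose-shift': repeat a rewarded choice, switch after a failure."""
    if prev_success:
        return prev_urn
    return "right" if prev_urn == "left" else "left"

def posterior_up(prior_up, p_win_up, p_win_down, first_success):
    """Bayesian updating: posterior probability of the 'up' state after
    observing the first draw's outcome (Bayes' rule)."""
    lik_up = p_win_up if first_success else 1.0 - p_win_up
    lik_down = p_win_down if first_success else 1.0 - p_win_down
    return prior_up * lik_up / (prior_up * lik_up + (1.0 - prior_up) * lik_down)

# Example with illustrative numbers: equal priors, a win being more likely
# in the 'up' state (0.8) than in the 'down' state (0.4).
print(reinforcement_choice("left", False))          # the heuristic switches: "right"
print(round(posterior_up(0.5, 0.8, 0.4, True), 3))  # posterior rises to 0.667
```

A Bayesian decision maker would then choose whichever second-draw option maximizes expected payoff under this posterior, regardless of whether the first draw felt like a success or a failure.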
Results of the self-report questionnaires revealed that, on average, participants put a lot of effort
into completing the experiment properly and found it important to perform well. They
rated the task as not difficult, which was in accordance with the experimenter's ratings of
participants' understanding of the rules of the decision task. The performance-related
incentives were a motivating factor for participation and good performance.
Participants showed at least some learning over the course of the experiment. The
probit regression analyses revealed a highly significant negative correlation between trial
number and the likelihood of committing an error. In contrast to Study 1a, this change in
decision behavior over time was also apparent from the analysis of error rates for different
sections of the task, which suggests that the learning effect was somewhat more pronounced
than in the previous study.
Rates of win-stay mistakes and lose-shift mistakes (i.e., reinforcement mistakes) were
considerably lower than in Study 1a, which confirms H1 and is in agreement with the
findings of Charness and Levin (2005; see also Charness et al., 2007) and Achtziger and
Alós-Ferrer (2010). As expected, participants committed few errors after draws from the
left urn when Bayesian updating did not conflict with a more automatic strategy. This
finding confirms the assumption that the high rate of switching errors after left draws in
Study 1a actually resulted from the emotional attachment of feedback (valence) which
triggered the reinforcement heuristic. When this simple strategy was not directly available,
people seem to have noticed the need for Bayesian updating to a much greater extent. In
the studies by Charness and Levin (2005) and Charness et al. (2007), the reduction of error
rates was more pronounced for win-stay mistakes compared to lose-shift mistakes. In the
present study a similar finding was obtained, which could be interpreted as indicating that
the reinforcing quality of success is generally stronger than that of failure (e.g., Joelving,
2009).
Interpreting this reduction of errors from a dual-process perspective, one could
suggest that, upon the removal of an immediate emotional reaction, the focus is shifted
from the impulsive to the reflective system, thereby facilitating the controlled processing
of information. In actuality, the relationship between an emotional state and decision
making is more complex. The dual-process theory of Strack and Deutsch (2004), for
example, maintains that the evaluative characteristics of information (e.g., success vs.
failure) create an action orientation towards approach or avoidance behavior (Gray, 1982),
respectively. This link between experienced valence and action orientation constitutes a
characteristic element of automatic as opposed to controlled processes. In the context of
the present paradigm, approach and avoidance behavior can be analogized to staying with
one urn or switching to the other. By removing valence from the first draw's outcome, the
links between experienced valence and action orientation (success and approach behavior;
failure and avoidance behavior) are eliminated. In this way, automatic reinforcement
processes no longer apply and the deliberate evaluation of information is facilitated.
One minor limitation to this conclusion lies in the fact that in the present study,
more participants majored in a subject with obligatory courses in statistics than in Study
1a. However, it does not seem probable that this small difference alone can explain the
large difference in error rates between the two studies.
H2 could also be confirmed by the data. Both in the aggregate and individual analysis of
error rates, participants who received 36 cents per successful draw committed significantly
fewer errors than those who received 18 cents. The probit regression analyses determined
that the incentive dummy (equivalently, the opportunity costs of errors) had a
highly significant effect on error frequencies, which is in agreement with the findings of
Charness and Levin (2005) and Achtziger and Alós-Ferrer (2010), but in contrast to the
findings of Study 1a. According to the idea of an effort-accuracy trade-off (Beach &
Mitchell, 1978; Payne et al., 1988, 1992), high-stake decisions make the adoption of
deliberate decision strategies more probable. In this way higher monetary incentives can
increase the effort to solve the decision problem because the decision maker wants to reach
a higher level of confidence before finally making a choice.
Given the significant between-participants effect in this study, the question of why
such an effect of monetary incentives was absent in Study 1a arises. Following Camerer
and Hogarth's (1999) line of argument, it could be argued that by altering the paradigm
and removing valence from the first draw, the task has become easier; thus the perceived
return-to-effort ratio has increased. Bonner and Sprinkle (2002) state that the probability
that an incentive will positively affect performance is inversely related to the complexity
of the task. Since participants were no longer misled by the reinforcement heuristic
and were not subjected to decision conflict in Study 1b, the task could be
considered to be less complex than that in Study 1a. Camerer and Hogarth point out that
“[t]here is no replicated study in which a theory of rational choice was rejected at low
stakes in favor of a well-specified behavioral alternative, and accepted at high stakes.”
(1999, p. 33). However, since the reinforcement heuristic did not apply in the altered
paradigm as it did in Study 1a, there was no "well-specified behavioral alternative" that
participants could have relied upon. The reduced complexity and absence of a simple
automatic strategy may have been the reason why monetary incentives did affect decision
performance in the present study. Another possible explanation is that the difference in
payment per successful draw was 18 cents (18 cents vs. 36 cents) compared to only 9 cents
in Study 1a (9 cents vs. 18 cents). Consistent with this, participants in the high-incentive condition
reported a significantly higher expected win amount than participants in the low-incentive
condition. In contrast, participants in the two incentive conditions of Study 1a did not differ
significantly in the amount of money they expected to win before the start of the experiment.
Again, variability in participants' personalities had a significant influence on decision
making and thus on error rates. As in Study 1a, this went along with huge interindividual
heterogeneity in error rates. Individual error rates were spread across nearly the entire
range, a result consistent with the findings of Charness and Levin (2005).
In line with expectations (H3), the self-assessed level of statistical skills was related
to decision performance (as was also the case in Study 1a). Participants who
considered themselves as possessing good statistical skills committed fewer errors than
those participants who considered themselves as possessing poor statistical skills.
Faith in intuition (Epstein et al., 1996) was again highly related to the rate of
reinforcement mistakes. In Study 1a it was assumed that the affective attachment of
feedback implicitly led to the activation of the previously successful behavioral option or
inhibited the previously unsuccessful option (Glöckner & Witteman, 2010a). The positive
association between rate of reinforcement errors and faith in intuition was interpreted as
indicating that intuitive participants relied on their gut feelings to a greater extent than those who
described themselves as less intuitive. If the affective attachment of the first feedback had
been completely removed by altering the paradigm in Study 1b, these gut feelings should
not have been aroused so that individual differences in faith in intuition should not have
affected decision making. The fact that intuitive people still showed a significantly higher
rate of reinforcement errors in Study 1b suggests that the “win-stay, lose-shift” strategy
was still relevant for them. Perhaps they attached valence to the first extracted ball by
retroactively defining it as a winning or losing ball after announcement of the winning
color, despite the fact that they were not paid for this outcome in any case. This means that
although the reinforcement heuristic was no longer triggered in the same way as in Study
1a (which was substantiated by the largely reduced rates of switching errors), there was
still a clue for the simple strategy and possibly even the retroactive experience of
reinforcement.
Contrary to Study 1a, no gender effect on participants' error rates was observed in
the present study. This finding is opposed to the findings of Charness and Levin (2005)
and Achtziger and Alós-Ferrer (2010), who found that female participants tended to deviate
from Bayes' rule more often than male participants. Conceivably, the gender
differences seen in both studies (and also in Study 1a) were not real differences in the
participants' ability to utilize Bayesian updating, but rather occurred due to
differences in other variables confounded with error rates which were not controlled for.
As already mentioned, this interpretation relies on the fact that the gender effect was
somewhat reduced by controlling for knowledge of statistics in Study 1a.
Interestingly, participants‟ level of extraversion and neuroticism (as measured by a
short version of the Big Five Inventory; Gerlitz & Schupp, 2005) showed a significant
positive relation to error rates. As for neuroticism, it has repeatedly been shown that this
tendency to experience negative emotions influences performance behavior (e.g.,
Chamorro-Premuzic & Furnham, 2003; Pham, 2007) and is associated with individual
differences in decision making (e.g., Maner et al., 2007), especially with decision making
deficits (e.g., Davis, Patte, Tweed, & Curtis, 2007; Denburg et al., 2009). Hilbig (2008),
for example, found that neuroticism was a predictor of participants' use of a simple decision
heuristic. He argued that neurotic individuals might avoid strategies which are diagnostic
of their abilities since they are more likely to experience post-failure shame and/or have
less trust in their knowledge. Therefore, these individuals prefer a simple heuristic strategy.
Extraversion has been linked to the strength of the midbrain dopaminergic reward system,
such that more extraverted individuals receive larger phasic bursts of dopamine in response
to a potential reward (Depue & Collins, 1999). Cohen et al. (2005) found that extraversion
was associated with increased activation in reward-sensitive regions when receiving
rewards. Highly extraverted people seem to have a strongly pronounced "hot" incentive
motivation network, which may be the reason why they prefer immediate rewards
(Hirsh, Morisano, & Peterson, 2008; Ostaszewski, 1996). Due to a more sensitive
dopaminergic reward system, extraverted people experience difficulties in regulating their
impulses. Perhaps neurotic as well as extraverted individuals were more prone to rely
on the "win-stay, lose-shift" strategy in the present paradigm (again assuming that a
first-draw ball was defined after the fact as being either winning or losing). Unfortunately,
these measures were not included in the questionnaire of Study 1a. Therefore, it is
uncertain whether these person factors are particularly relevant to reinforcement mistakes
or to all types of errors in general.
H4, which postulated an absence of an FRN response after feedback on the first draw, was
confirmed. There was no FRN evident, which can be explained by the fact that the outcome of
the first draw was presented without linking it to success or failure (i.e., it was free of
valence), and was only presented to give participants the chance to update their prior
probabilities concerning the distribution of balls in the urns. This finding strongly supports
the argument that the differentiated FRN responses observed in Study 1a were responses to
the valence of the feedback and not general responses to the mere observation of the drawn ball.
At first glance, the finding of an absent FRN response to neutral feedback might
seem to contradict results of previous studies by Holroyd and colleagues (Hajcak et al.,
2006; Holroyd et al., 2006; see also Toyomaki & Murohashi, 2005). In these studies it was
found that neutral feedback (either a signal indicating receipt/loss of nothing, or an
uninformative signal) consistently elicited an FRN which was comparable in amplitude to
that which arose in response to negative feedback. But in contrast to the present study, the
participants in the aforementioned studies could potentially win or lose money in the
respective trials and therefore expected (or hoped for) positive feedback. Therefore, neutral
feedback in these paradigms indicated that a goal had not been satisfied. However, in the
present study, feedback on the first draw was always free of valence and could not even
potentially indicate the achievement of a goal.
In contrast to the electrocortical response to feedback on the first draw's outcome, second-draw feedback was followed by a typical FRN response. This second-draw FRN tended to
be more pronounced in response to negative vs. positive feedback. This was in line with
the predictions of H5. Moreover, in both incentive conditions the amplitude of the FRN in
response to second-draw negative feedback was related to individual differences in
reinforcement behavior. Repeated-measures ANOVAs demonstrated that the amplitude of
the FRN was more pronounced (i.e., more negative) for participants with a high vs. a low
rate of reinforcement mistakes. Taking the opposite approach, the regression analyses
revealed that the likelihood of reinforcement errors was significantly positively predicted
by the amplitude of the FRN in response to second-draw negative feedback. This finding
suggests that despite the alterations of the paradigm, reinforcement experiences were not
completely precluded in Study 1b. If there had been no reinforcement from the outcome of
the first draw at all, individual differences in the predisposition for reinforcement learning
(as reflected in FRN amplitude) should not have mattered for error rates. The significant
positive association between FRN amplitude and the rate of switching errors suggests that
(as previously argued for faith in intuition) after the announcement of the winning color,
participants may have retroactively attached valence to the feedback. In this way, a "win-stay, lose-shift" strategy (which is constitutive of reinforcement learning) could still be
applied even though such a “winning” ball was introduced as neutral information in the
first place and was not paid for. Under this assumption it would make sense that
participants with a large FRN amplitude (which is assumed to be an indicator of
reinforcement learning; Holroyd & Coles, 2002; Holroyd et al., 2002; Holroyd et al., 2003;
Holroyd et al., 2005) made their decisions in the second draw dependent upon whether
their first draw produced a “winning” or “losing” ball37. However, in light of the
significantly reduced error rates in Study 1b, it can be assumed that this retroactive
reinforcement had much less impact on decision making compared to the affective
attachment of feedback which directly triggered the reinforcement heuristic in Study 1a.

37 If such a retroactive reinforcement actually took place, then the announcement of the winning color of the
current trial ("Blue wins" or "Green wins") might also have elicited an FRN response in case the winning
color and the color of the first drawn ball did not match (i.e., in case of "negative feedback"). Similar to the
FRN amplitude elicited by negative feedback on the second draw, this FRN might have been more
pronounced for participants with a high rate of reinforcement errors. Unfortunately, this assumption could not
be tested because EEG data after the presentation of this announcement were highly contaminated by ocular
activity, probably due to participants' reading of the text. Future studies would be advised to keep the stimuli
simpler and to announce the winning color by only presenting a blue or a green ball, for instance, in order to
be able to analyze the corresponding ERPs.
The finding that feedback on the second draw (which was linked to success or
failure, i.e., had an evaluative dimension) elicited an FRN suggests that reinforcement
learning is a fundamental process (see Donahoe & Dorsel, 1997; Holroyd & Coles, 2002;
Sutton & Barto, 1998). Despite the fact that the outcome of the second draw was not
relevant for future decisions (as the second draw was followed by a new trial in which the
state of the world would be determined once again), the modulation of the FRN in response
to second-draw negative feedback was observed as though a type of reinforcement learning
took place. One can argue that if positive or negative valence is attached to a decision‟s
feedback, this feedback inevitably triggers automatic learning processes (at least in a
certain group of people), that is, processes that occur immediately, without further intent,
and which seem to be quite rigid (see Glöckner & Witteman, 2010a; Holroyd & Coles,
2002).
Though not significantly so, the relationship between the amplitude of the FRN and
the rate of reinforcement mistakes was somewhat moderated by the magnitude of
incentives. The repeated-measures ANOVA revealed that in both incentive conditions, the
FRN amplitude was more pronounced for participants with a high vs. low rate of
reinforcement mistakes. But visual inspection of the waveforms showed that this difference
was more pronounced in the low-incentive (18 cents) condition. When incentives were
very high (36 cents), the difference between the groups was smaller due to the fact that the
FRN was also strongly pronounced for participants with a low rate of reinforcement
mistakes. This result again implies that the FRN is modulated by the motivational
significance of feedback (Holroyd et al., 2003; Masaki et al., 2006; Takasawa et al., 1990;
Yeung et al., 2005). Perhaps missing out on a 36-cent reward was such an aversive
event that even participants who normally relied only minimally on the reinforcement
heuristic exhibited a strongly pronounced FRN response.
As in both Study 1a and Study 1b the relationship between FRN amplitude and rate
of reinforcement mistakes was somewhat affected by the magnitude of incentives, it might
be concluded that the FRN response is dependent upon the size of the monetary incentives.
In Study 1a, there was a weak FRN response to first-draw negative feedback for the group
whose participants received very low incentives (9 cents). Thus, the amplitude of the FRN
did not vary between participants with a low vs. high rate of switching errors, possibly due
to a “floor effect”. In the group whose participants received medium incentives (18 cents),
a differentiated FRN was observed in response to first-draw negative feedback in Study 1a
and to second-draw negative feedback in Study 1b, respectively. In both cases, the
amplitude of the FRN was more pronounced for those participants with a high rate of
reinforcement mistakes. When monetary incentives were very high (36 cents) in Study 1b,
the FRN for all participants was strongly pronounced. It was only slightly more negative
for participants with a high rate of reinforcement mistakes. This might be explained in
terms of a “ceiling effect”. A number of recent studies have shown that the magnitude of a
loss (within-subjects) affected the amplitude of the FRN (e.g., Bellebaum et al., 2010;
Goyer et al., 2008; Marco-Pallarés et al., 2009; Onoda et al., 2010; Wu & Zhou, 2009),
consistent with fMRI research indicating that the ACC is sensitive to reward magnitudes
(e.g., Knutson et al., 2005; Rolls et al., 2008). However, similar conclusions from the
present findings should not be drawn since the first draw in Study 1a and the second draw
in Study 1b are not comparable. As shown by the error rates, the two paradigms induced
very different decision behaviors. Thus, the medians by means of which participants were
classified as either low or high reinforcement learners also differed considerably (for instance, a
reinforcement error rate of 40 % would have been classified as low in Study 1a, but as high
in Study 1b). Furthermore, it can be assumed that outcome expectations varied greatly for
the first vs. second draw; thus the corresponding FRN responses are not comparable.
In Study 1b, both affective and monetary reinforcement was removed from the first draw.
Therefore, it is unclear whether the reduction of switching errors was primarily due to the
elimination of emotional attachment (outcomes could not be evaluated as success or failure
so that participants were neither joyful nor disappointed) or primarily due to the fact that
decisions were not rewarded. Further research should attempt to disentangle these two
components and determine the effect of an absence of monetary reinforcement (e.g., by
paying participants a flat fee for participation) while maintaining emotional reinforcement
(because feedback can still be identified as success or failure). The question then arises of how
to make sure that the participants are sufficiently motivated when payment is not
contingent upon their performance during the study. A situation in which monetary
reinforcement is maintained but emotional reinforcement is removed hardly seems feasible
since affective reactions towards wins and losses can never be avoided.
In summary, the findings of the present study support the conclusions of Study 1a. Rational
decision behavior (i.e., the controlled process of applying Bayes' rule) may be
perturbed by a reinforcement process which is more automatic in character. When
reinforcement experiences are constrained, incentives significantly affect decision
performance and controlled behavior becomes possible to a greater extent. However, intuitive
decision making is not completely precluded. Interindividual heterogeneity remains relevant
and is again reflected in the amplitude of the FRN.
2.3 Study 2: Conservatism and Representativeness in Bayesian Updating – Investigating Rational and Boundedly Rational Decision Making by means of Error Rates, Response Times, and ERPs
2.3.1 Overview
In Study 1a, the reinforcement heuristic (Charness & Levin, 2005) was investigated
and consequently identified as an automatic process sometimes conflicting with the
rational process of Bayesian updating. In the present study, the focus is again on two
simple decision rules which might lead to deviations from rational decision behavior under
certain circumstances. These rules are the conservatism heuristic (Edwards, 1968) and the
representativeness heuristic (Grether, 1980, 1992; Kahneman & Tversky, 1972). Again, a
dual-process perspective (see Evans, 2008; Sanfey & Chang, 2008; Weber & Johnson,
2009) is taken, and different rationality levels, analogous to different levels of automaticity
versus controlledness, are assumed (Ferreira et al., 2006). It is hypothesized that a
controlled process leading to Bayesian updating is often perturbed by other, partly more
automatic processes, depending on several situational and person factors.
Results of many studies have shown that people often base their decisions solely
upon base-rates, ignoring further information (e.g., Dave & Wolfe, 2003; El-Gamal &
Grether, 1995; Grether, 1992; Kahneman & Tversky, 1972; Phillips & Edwards, 1966), a
tendency that has been termed conservatism (Edwards, 1968). To date, there is no
all-encompassing explanation that accounts for this non-Bayesian behavior (see 1.2.2.1).
Kahneman (2003) refers to a person‟s tendency to neglect base-rate information as a
property of intuitive judgment. In any case, it can be assumed that conservatism is a simple
decision strategy in the sense of a heuristic (Gigerenzer & Goldstein, 1996; Kahneman et
al., 1982): it extracts only a few cues from the environment, ignores a good portion
of the available relevant information, and is cognitively parsimonious (e.g., Gigerenzer,
2007). More precisely, it is suggested that conservative people follow a simple “base-rate
only” (Gigerenzer & Hoffrage, 1995) decision rule, disregarding any additional evidence.
Such an assumption can be tested by investigating the lateralized readiness potential (LRP;
de Jong et al., 1988; Gratton et al., 1988; see Eimer, 1998), an event-related potential
which indicates the brain's lateralized response preparation and can be seen as a temporal
marker of the moment at which the brain has begun to decide on one response or the other
(see review by Smulders & Miller, forthcoming). Compared to non-conservatives, people
who rely solely (or too strongly) on base-rate information are assumed to already exhibit
stronger LRP activation before further evidence is presented.
The opposite phenomenon, base-rate neglect, has also been demonstrated in many
empirical studies. One example is the representativeness heuristic (Camerer, 1987;
Grether, 1980; Kahneman & Tversky, 1972; Tversky & Kahneman, 1971) in which
probability is confused with similarity and base-rates are largely ignored (Tversky &
Kahneman, 1982). The representativeness heuristic corresponds to “matching intuition”
(Glöckner & Witteman, 2010a) whereby an object or situation is automatically compared
with exemplars stored in memory (e.g., Dougherty et al., 1999; Juslin & Persson, 2002).
The resulting pattern recognition process creates a feeling towards a behavioral option or
activates an action script, perhaps subconsciously (Dougherty et al., 1999; Klein, 1993).
Therefore it is assumed that the representativeness heuristic is more automatic in character
than the controlled process of Bayesian updating. Grether (1980, 1992) designed an
experiment in which the representativeness heuristic is either (a) aligned with
Bayesian updating, (b) in competition with Bayesian updating, or (c) not applicable.
According to a dual-process perspective (see Evans, 2008; Sanfey & Chang, 2008; Weber
& Johnson, 2009), the alignment of, or conflict between, the strategies should be evident in
the decision behavior, response times, and also brain activity. First, a more automatic
process should result in shorter response times. When the two processes are aligned,
response times and error rates should be at their lowest. In these situations, fast decisions
should be associated with few errors. In contrast, when the two decision processes are
opposed (i.e., deliver different responses for the same stimuli), cognitive resources should
be taxed and conflict should occur, resulting in longer response times and a higher error
rate38. In these situations, fast decisions should be associated with more errors. Moreover,
response conflict is assumed to be evident in the amplitude of the N2. The N2 (for a
review, see Folstein & Van Petten, 2008) seems especially suitable for investigating
conflict elicited by the representativeness heuristic since the amplitude of this component
is thought to reflect the amount of response conflict (e.g., Bartholow et al., 2005; Donkers
& van Boxtel, 2004; Nieuwenhuis et al., 2003). The conflict-monitoring theory (Yeung et
al., 2004) predicts a larger N2 on correct high-conflict trials relative to correct low-conflict
trials. Moreover, several studies have found a relationship between N2 amplitude and
individual differences in conflict monitoring (e.g., Amodio, Master, et al., 2008; Fritzsche
et al., 2010; Pliszka et al., 2000; van Boxtel et al., 2001). Therefore, decision makers who
mostly rely upon the representativeness heuristic during conflict situations are assumed to
differ from Bayesian updaters with regard to N2 amplitude.

38 One example of such a conflict is the Stroop effect (Stroop, 1935). In the Stroop task, participants are
required to name the color in which a word is printed, not the color denoted by the word. When the name of a
color is printed in a color not denoted by the name (e.g., the word "red" printed in blue ink instead of red
ink), it is quite difficult to name the ink color (blue) without suffering some interference, which is reflected in
a slowdown of reaction times and a proneness to errors. One explanation for this phenomenon is that reading
the meaning of a word is an automatic process and cannot easily be suppressed.
Epstein et al. (1996) discovered interindividual differences regarding trust in the
experiential system (faith in intuition) and enjoyment of cognitive activity (need for
cognition). Since faith in intuition corresponds to heuristic processing as described by
Kahneman et al. (1982), it can be presumed that individuals high in this trait rely on
conservatism as well as on the representativeness heuristic to a greater extent than
Bayesian updaters.
In order to test these ideas, participants of the present study made a series of
decisions in a paradigm similar to that of Grether (1980). They observed that, from one of
two urns with certain prior probabilities, balls were drawn with replacement. Their task
was to determine (guess) from which urn the balls had been drawn. For every correct guess
they received 6 cents. The participants' EEG was recorded while they were presented with
prior probabilities and evidence on which to base their decision. As dependent variables,
error rates, response times, and the LRP and the N2 in response to stimuli were measured.
Various person characteristics were assessed by means of a questionnaire.
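Under stated assumptions (illustrative urn compositions; Grether's actual parameters are not reproduced here), the alignment or conflict between the representativeness heuristic and Bayesian updating can be sketched as follows:

```python
from math import comb

# Illustrative two-urn setup (hypothetical numbers): urn A contains a
# fraction p_a of balls of the target color, urn B a fraction p_b; n balls
# are drawn with replacement and k of them show the target color.
p_a, p_b = 2 / 3, 1 / 2

def bayes_choice(prior_a, k, n):
    """Bayesian choice: compare posteriors built from binomial likelihoods."""
    lik_a = comb(n, k) * p_a**k * (1 - p_a)**(n - k)
    lik_b = comb(n, k) * p_b**k * (1 - p_b)**(n - k)
    return "A" if prior_a * lik_a >= (1 - prior_a) * lik_b else "B"

def representativeness_choice(k, n):
    """Heuristic choice: pick the urn whose composition the sample share
    resembles most, ignoring the prior probabilities."""
    share = k / n
    return "A" if abs(share - p_a) < abs(share - p_b) else "B"

# A sample of 4 target balls in 6 draws (share 2/3) looks exactly like urn A,
# so the heuristic picks A regardless of the prior.
print(representativeness_choice(4, 6))  # "A"
print(bayes_choice(2 / 3, 4, 6))        # "A" -- heuristic aligned with Bayes
print(bayes_choice(1 / 3, 4, 6))        # "B" -- heuristic opposed to Bayes
```

In this sketch, whether a trial is an alignment or a conflict trial depends on the prior: with a prior favoring urn A, both rules agree; with a prior favoring urn B, the heuristic and Bayes' rule deliver opposite responses, which is the conflict case in which longer response times and a larger N2 were predicted.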
One of the research aims was to replicate the results obtained by Grether (1980): a
low error rate when Bayesian updating and the representativeness heuristic are aligned and
a high error rate when the two strategies are opposed. The present study also aimed to shed
light on the processes underlying participants' decision behavior. Therefore, on the one
hand, response times were investigated. Short response times and an association between
fast responses and correct decisions were expected for situations in which the
representativeness heuristic and Bayesian updating were aligned. In contrast, longer
response times and an association between fast responses and incorrect decisions were
expected for situations in which the two strategies were opposed. On the other hand, the
processing of prior probability information and sample information was investigated by
analysis of the LRP and the N2. According to the conflict-monitoring theory (Yeung et al.,
2004), the N2 amplitude on correct trials39 was expected to be especially pronounced in
situations with a conflict between the rather automatic representativeness heuristic and the
more controlled process of Bayesian updating.

39 In line with previous studies (e.g., Bartholow et al., 2005; Enriquez-Geppert et al., 2010; Fritzsche et al.,
2010), only trials with correct reactions were used for all N2 analyses (see also below), following the
assumption that the N2 reflects conflict monitoring before a correct response (Yeung et al., 2004; Yeung &
Cohen, 2006).

Furthermore, a negative relationship was
expected between an individual‟s proneness to the representativeness heuristic and the
magnitude of his/her N2 amplitude on correct conflict trials. Conservative decision makers
were expected to exhibit a strong LRP, indicating a response preparation for the urn with
the higher prior probability before presentation of further evidence. Therefore, a positive
relationship was expected to exist between the participants' rate of conservative errors and their LRP amplitude (immediately preceding the presentation of sample evidence).
Moreover, the relationship between person characteristics (e.g., skills in statistics, faith in
intuition) and decision behavior was investigated in the same manner as in Study 1a and
1b. The decisions of participants with high statistical skills were expected to align better with Bayesian updating than those of participants with low skills. Faith in intuition was expected to be relevant concerning
the two simple decision strategies. Specifically, it was predicted that intuitive participants
would rely more strongly on conservatism and the representativeness heuristic than
participants who describe themselves as relying less on their gut feelings when making
their decisions.
Thus, the hypotheses of Study 2 were the following:
H1: The representativeness heuristic affects decision outcomes: Compared to
situations of equal difficulty in which the heuristic does not apply, error rates are lower
when the representativeness heuristic leads to the correct choice and higher when it leads
to the wrong choice.
H2: The representativeness heuristic influences response times: Situations with a
decision conflict due to the representativeness heuristic involve longer response times than
situations of equal difficulty without such a conflict. In contrast, situations in which
Bayesian updating and the heuristic are aligned involve shorter response times than
situations of equal difficulty in which the heuristic does not apply.
H3: In situations where the representativeness heuristic leads to the correct choice,
fast decisions are associated with fewer errors than slow decisions. In contrast, when the
heuristic leads to the wrong choice, fast decisions are associated with more errors.
H4: Skills in statistics and faith in intuition influence decision making: Error rates
are generally lower for participants with higher statistical skills. Participants high in faith
in intuition commit more conservative errors than participants low in faith in intuition.
Moreover, intuitive participants commit more errors in situations where the
representativeness heuristic conflicts with Bayesian updating, compared to participants low
in faith in intuition.
H5: By investigating the LRP, one can identify conservative participants who
overweight priors and underweight sample evidence: Participants in whom response
activation (left vs. right) varies strongly with the prior probabilities of the urns have higher
rates of errors in situations where conservatism conflicts with Bayesian updating.
H6: In situations where the representativeness heuristic conflicts with Bayesian
updating, there is a higher degree of response conflict as indicated by the amplitude of the
N2 compared to situations of equal difficulty in which the heuristic does not apply.
H7: By investigating the N2 amplitude, one can identify participants who are better able than others to detect conflicts between the representativeness heuristic and Bayesian updating: Participants with a strong (vs. weak) N2 amplitude in conflict situations have lower rates of errors in these situations.
2.3.2 Method
Participants
Twenty-six participants (13 male, 13 female) with normal or corrected-to-normal vision, ranging in age from 19 to 34 years (M = 21.9, SD = 2.89), were recruited from the student
community at the University of Konstanz. Students majoring in economics were not
allowed to take part in the study. Participants were compensated with 5 € plus a monetary
bonus that depended upon the outcomes of the computer task, as described below. All
participants signed an informed consent document before the experiment started. The
individual sessions took approximately 150 minutes.
One participant was excluded from the data analysis for the following reasons: as determined by the experimenter, this person had a very poor comprehension of the task's rules, did not accomplish the task properly, and exhibited low motivation to perform well.
Design
The study followed a 2 between (counterbalance: majority of blue balls vs. majority of green balls) x 3 within (decision situation: alignment vs. conflict vs. neutral)[40] mixed-model design.
Dependent variables were error rates, response times, and ERPs (LRP, N2)
measured during the computer task, and different person variables (e.g., faith in intuition,
skills in statistics and grade in the Abitur/mathematics) which were assessed by a final
questionnaire (see below).
Material
All participants were run individually by two female experimenters in a laboratory
at the University of Konstanz. Each participant was seated in a soundproof experimental
chamber in front of a desk with a computer monitor and a keyboard.
The experiment was run on a personal computer (3RSystems K400 BeQuiet; CPU: AMD
Athlon 64x2 Dual 4850E 2.5 GHz; 2048 MB RAM; graphic board: NVIDIA GeForce
8600 GTS with 256 MB RAM; hard drive: Samsung HDD F1 640GB SATA; operating
system: Windows XP Professional) using Presentation® software 12.2 (Neurobehavioral
Systems, Albany, CA).
Stimuli were shown on a 19″ computer monitor (Dell P991, resolution: 1024 x 768
pixels, refresh rate: 100 Hz) at a distance of about 50 cm. The two keys on the keyboard
(Cherry RS 6000) that had to be pressed in order to select an urn ('F' key and 'J' key) were
marked with glue dots to make them easily recognizable.
Stimuli were images of colored balls (blue and green, image size 40 x 40 pixels)
shown on the screen of the computer monitor. Additionally, numbers representing further
information about the rules of the task were presented (see Figure 23).
[40] Alternatively, a different classification of decision situations is possible (see below). Following this approach, the design was a 2 between (counterbalance: majority of blue balls vs. majority of green balls) x 4 within (decision situation: alignment vs. conflict vs. conservative vs. neutral) mixed-model design.
Figure 23. Stimulus picture on the computer screen if the majority of balls is blue: two urns with blue and green balls, their assigned number(s) beside the urns, and the fixation square between the urns.
In order to reduce eye movements, all stimuli were kept centered in the middle of the
screen within a space of 220 x 220 pixels. To control for effects of color brightness on
EEG data, the blue and green balls were of the same brightness (blue: RGB value 0-0-255;
green: RGB value 0-127-0).
Procedure
Participants individually arrived at the laboratory where they were greeted by the
female experimenter and were asked to sit down in front of the PC. They gave written
informed consent for the experiment and their handedness was tested by means of a short
questionnaire.
After application of the electrodes, each participant was asked to read through the
instructions which explained the experimental set-up (see appendix E). The instructions
were very elaborate and described the rules of the decision task in detail, also showing
screen shots of the computer program. The participant was informed that, after reading
through the instructions, he/she would be required to explain to the experimenter the
experimental set-up and rules, and would also have to answer questions concerning the
mechanism of making choices. The experimenter paid attention to whether the central
aspects had been comprehended and clarified any misconceptions that the participant had
with the rules or mechanisms. However, the experimenter did not answer questions
concerning potential payoff maximizing strategies or statistical considerations. Finally, by
means of an experimental protocol, the experimenter made a global assessment of the
participant's understanding (see appendix E) and asked the participant for his/her
prediction of the amount of money he/she would earn throughout the computer task. This
elaborate procedure was meant to ensure that every participant had a complete
understanding of the experiment. In addition, participants were instructed to move as little
as possible during the computer task, to keep their fingers above the corresponding keys of
the keyboard, and to maintain their gaze focused at the fixation square in the center of the
screen.
Each participant, while under supervision of the experimenter, completed six
practice trials in order to become accustomed to the computer program. Correct decisions
in these trials were not rewarded. During the first three practice trials, the temporal
sequence of events was decelerated. The remaining three practice trials did not differ from
the subsequent 600 experimental trials[41].
After the experimenter was convinced that the participant completely understood
the whole procedure, the Bayesian updating experiment was started, during which the EEG
was recorded. For the duration of the computer task, the participant was alone in the
soundproof experimental chamber.
Very similar to the experiments reported by Grether (1980, 1992), samples were
drawn from one of two populations with known compositions. There was a fixed and
known prior used to select between the populations. Participants had to name the
population they thought the sample was drawn from. Assuming participants were motivated to give the correct answer (because their reward depended upon giving correct answers, see below), the choice of either population indicated that the participant's posterior probability for that population exceeded one half.
More precisely, the participants' task was to guess from which of two possible urns the computer had drawn some balls. Each of the two urns was filled with a mixture of 4 blue and green balls[42]. The distribution of balls in the urns depended on the
counterbalance condition (see Figure 24). In counterbalance condition 1, the left urn was
filled with 3 blue balls and 1 green ball and the right urn was filled with 2 blue balls and 2
green balls. In counterbalance condition 2, the left urn contained 3 green balls and 1 blue
ball, while the right urn contained 2 green balls and 2 blue balls.
[41] To collect enough EEG data to analyze ERPs, participants completed 600 trials, instead of about 16 trials in the study by Grether (1980; see Harrison, 1994). This high number of trials allowed the collected data to be subdivided depending on the particular decision situations, and thereby allowed different kinds of errors committed by the participants to be analyzed.
[42] In the studies by Grether (1980, 1992), the urns were filled with 6 balls. To make sure that decision behavior would be similar with only 4 balls, a pretest without EEG measurement was run, which confirmed this assumption.
                   Left Urn           Right Urn
Counterbalance 1   3 blue, 1 green    2 blue, 2 green
Counterbalance 2   3 green, 1 blue    2 green, 2 blue

Figure 24. Distribution of balls in the two urns depending on counterbalance condition.
From the instructions, participants knew that the computer would first draw a random
number between 1 and 4 which would not be revealed to the participant. According to a
variable decision rule (which participants were aware of), the computer would
subsequently choose one of the urns.
In every trial, participants first received information about the rule of the current
trial. The decision rule could be, for example: If the random number is within the range of
1 to 3, the left urn will be chosen. If the number is 4, the right urn will be chosen. This rule
provided the prior to select between the two urns. The prior probabilities for the left urn
were 1/4, 1/2 and 3/4 and occurred in a random order for each participant[43]. Then
participants were presented with a sample of four balls that had been “drawn” (with
replacement) randomly by the computer from the urn that had been previously chosen.
Participants had to guess from which urn the balls had been drawn and to indicate their
decision by pressing one of two distinct keys of the keyboard ('F' key for the left urn and 'J' key for the right urn). Participants were allowed to take as long as they wished for their
decisions. No feedback was given about the correctness of the choice.
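The generative process of one trial can be summarized in a few lines. The following sketch (Python; all names are illustrative and not taken from the original experiment software, which used Presentation®) simulates one trial of counterbalance condition 1:

```python
import random

def simulate_trial(prior_left, rng=None):
    """Simulate one trial (counterbalance 1): the computer first picks an urn
    according to the prior probability of the left urn, then draws 4 balls
    with replacement from the chosen urn."""
    rng = rng or random.Random()
    urn = "left" if rng.random() < prior_left else "right"
    p_blue = 3 / 4 if urn == "left" else 1 / 2   # left urn: 3 blue, 1 green
    n_blue = sum(rng.random() < p_blue for _ in range(4))
    return urn, n_blue

urn, n_blue = simulate_trial(3 / 4, random.Random(1))
```

The participant then sees only the prior rule and the sample of `n_blue` blue balls, and has to infer the urn.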
There were 600 trials divided into 6 parts, with a two-minute break between parts. Participants received 6 cents for every correct guess and nothing for a wrong guess. Thus, the maximum amount of money that could be earned was 36 €.
At the beginning of a trial, a fixation square (image size 11 x 11 pixels) was shown
in the middle of the screen for 500 ms. Subsequently, for a duration of 2000 ms,
information about the current rule of the task and the distribution of balls in the urns was
presented (see Figure 25):
[43] Priors were not equally frequent, though. Prior probabilities of 1/2 occurred with a probability of 1/4, while the other two priors occurred with a probability of 3/8 each. This distribution of prior probabilities was implemented in order to collect enough data for ERP analysis from decision situations of particular interest.
Figure 25. Stimulus picture on the computer screen if the majority of balls is blue: Here, the left
urn is filled with three blue balls and one green ball and is selected if the computer randomly draws
the number 1 (prior probability of 25 %). The right urn is filled with two blue and two green balls
and is selected if the computer randomly draws the number 2, 3 or 4 (prior probability of 75 %).
After 2000 ms, this information was replaced by a blank screen for 500 ms, followed by a
fixation square for another 500 ms. Then, the 4 drawn balls simultaneously appeared, one
below the other (see Figure 26):
Figure 26. Stimulus picture on the computer screen: Exemplary sample of drawn balls.
The screen remained this way until the participant pressed either the 'F' key (left urn) or the 'J' key (right urn) to indicate a decision. 200 ms after a selection was made, the balls
disappeared. After an inter-trial-interval of 1500 ms (blank screen), the next trial started.
Figure 27 provides an overview of the sequence of events in the computer experiment:
Fixation Square (500 ms) → Information (2000 ms) → Blank Screen (500 ms) → Fixation Square (500 ms) → Drawn Balls / Sample (RT + 200 ms)
Figure 27. Schematic representation of the temporal sequence of events during one trial of the computer task.
When all 600 experimental trials were completed, the amount of money earned during the
task was displayed on the screen, along with details of how many decisions had been
correct. Depending on the time the participant took for his/her decisions, the experiment
lasted about 70 minutes. When the experimental procedure was completed, the cap and
external electrodes were removed from the participant.
After the computer experiment, participants filled out a questionnaire (see appendix
E) which investigated several control variables: faith in intuition and need for cognition
(Rational-Experiential Inventory [REI]; Epstein et al., 1996; German version: Keller et al.,
2000), preference for intuition and deliberation (PID; Betsch, 2004), locus of control (10-item version of the SOEP; see Siedler, Schupp, Spieß, & Wagner, 2008), the Big Five Inventory-SOEP (BFI-S; Gerlitz & Schupp, 2005), a 3-item general numeracy scale (Lipkus, Samsa, & Rimer, 2001), and a subjective numeracy scale (SNS; Fagerlin et al., 2007).
Furthermore, participants' subjective value of money was measured by asking the
following questions: “Stellen Sie sich vor, Sie hätten 150 € im Lotto gewonnen. Wie stark
würden Sie sich über den Gewinn freuen?“ [“Imagine you have won € 150 in the Lotto.
How intense would be your joy?“]; “Stellen Sie sich vor, Sie hätten 30 € verloren. Wie
stark würden Sie den Verlust bedauern?“ [“Imagine you have lost € 30. How intense would
be your sorrow?“]; “Stellen Sie sich bitte Folgendes vor: Sie haben gerade ein
Fernsehgerät für 500 € gekauft. Ein Bekannter erzählt Ihnen, dass er nur 450 € für genau
das gleiche Gerät bezahlt hat. Wie stark würden Sie sich deswegen ärgern?“ [“Imagine you
have just bought a TV-set for € 500. Now, an acquaintance tells you that he paid only €
450 for exactly the same TV-set. How intense is your anger?“] (see Brandstätter &
Brandstätter, 1996). Participants were asked to score their response to these questions by placing a mark on an analog scale (length: 10 cm), continuously ranging from 0 (“gar nicht” [“I wouldn't care at all”]) to 10 (“sehr” [“I'd be extremely delighted/upset/angry”]).
Moreover, participants assessed their skills in statistics and stochastics by answering the
following questions, again by placing a mark on a 10 cm continuous analog scale: “Wie
schätzen Sie Ihre allgemeinen statistischen Vorkenntnisse ein?” [“How would you judge
your previous knowledge in statistics?”]; “Wie schätzen Sie Ihre Fähigkeiten in der
Berechnung von Wahrscheinlichkeiten ein?” [“How would you judge your skills in
calculating probabilities?”]. Answers ranged from 0 (“sehr schlecht” [“very poor”]) to 10
(“sehr gut” [“very good”]). In addition, the following variables were assessed: the
experiment's difficulty; effort made to complete the task; the self-perceived importance of
performing well; decision strategies utilized by the participant; general questions about the
EEG measurement. Finally, participants were asked for demographic information.
After the participant had finished the questionnaire, he/she was given the
opportunity to wash his/her hair. Finally, participants were thanked, paid, and debriefed.
Mathematical Background
Ignoring the order of the drawn balls, there are five possible outcomes (zero
through four 1-balls[44]) and three priors, resulting in 15 possible decision situations. The
sample size of four and sample proportions were picked so that the probability of picking a
sample that resembled one of the parent populations (CB 1: 3 blue balls and 1 green ball or
2 blue balls and 2 green balls; CB 2: 3 green balls and 1 blue ball or 2 green balls and 2
blue balls) was large. The idea behind this was that if the representativeness hypothesis is
[44] A 1-ball is a blue ball in counterbalance condition 1 (majority of blue balls) and a green ball in counterbalance condition 2 (majority of green balls).
correct, participants would tend to think that such samples came from the population they
resembled even though the actual posterior odds favored the contrary.
Table 13 gives the posterior odds for the left urn for each combination of outcome and prior[45]. Table 14 presents the posterior probabilities of the left urn for each outcome-prior combination (probabilities greater than 0.5 indicate that choosing the left urn is the correct choice according to Bayesian calculations, whereas probabilities smaller than 0.5 indicate that choosing the right urn is the correct choice[46]).
Table 13
Posterior odds for the left urn depending on prior probability and number of 1-balls

                      Number of 1-balls
Prior       0         1         2         3          4
3/4      0.19:1    0.56:1    1.69:1    5.06:1    15.19:1
1/2      0.06:1    0.19:1    0.56:1    1.69:1     5.06:1
1/4      0.02:1    0.06:1    0.19:1    0.56:1     1.69:1
Table 14
Posterior probabilities of the left urn depending on prior probability and number of 1-balls

                 Number of 1-balls
Prior      0       1       2       3       4
3/4      0.16    0.36    0.63    0.83    0.94
1/2      0.06    0.16    0.36    0.63    0.83
1/4      0.02    0.06    0.16    0.36    0.63
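The entries of Tables 13 and 14 follow directly from Bayes' rule. Since both urns contain 4 balls and samples are drawn with replacement, the likelihood ratio for k 1-balls in 4 draws reduces to 3^k/16 (the binomial coefficients cancel). A minimal check in Python using exact fractions (written for this text, not part of the original analysis):

```python
from fractions import Fraction

def posterior_odds_left(prior_left, k, n=4):
    # Posterior odds for the left urn after observing k 1-balls in n draws
    # with replacement. Left urn: 3 of 4 balls are 1-balls; right urn: 2 of 4.
    likelihood_ratio = (Fraction(3, 4)**k * Fraction(1, 4)**(n - k)) / Fraction(1, 2)**n
    prior_odds = prior_left / (1 - prior_left)
    return prior_odds * likelihood_ratio

# Conflict cell of Table 13: prior 3/4 and two 1-balls
odds = posterior_odds_left(Fraction(3, 4), 2)   # 27/16 = 1.6875, i.e. 1.69:1
prob = odds / (1 + odds)                        # 27/43, printed as 0.63 in Table 14
```

The same function reproduces every cell; for instance, a prior of 1/2 and four 1-balls yields 81/16 ≈ 5.06:1.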
The experiment has been designed in such a way that several outcome-prior combinations
have the same (or nearly the same) posterior odds, but different conditions of
representativeness. Two cells with the same posterior odds might be thought of as
representing equally difficult problems for the participant in the sense of a comparable
computational burden. Thus, all six situations in which the odds favoring the more likely alternative are 1.69:1 or 1.78:1 may represent virtually equally difficult choices. As a measure of inverse difficulty, the corrected odds for the more likely alternative were computed by subtracting 1 from the corresponding posterior odds. This new measure indicates how distant the odds are from 1:1. The larger the value of the corrected odds, the simpler the decision is assumed to be. Table 15 shows the corrected odds for each outcome-prior combination, with the six situations of (virtually) equal difficulty marked.
[45] Unless otherwise stated, prior refers to the prior probability of the left urn.
[46] Note that due to the probabilistic nature of the task, a correct choice in terms of Bayesian updating (choosing the urn with the higher posterior probability) may nevertheless be a wrong choice in terms of the computer's choice (i.e., the urn the computer has actually chosen) and may therefore not be paid, and vice versa.
Table 15
Corrected odds as an inverse measure of difficulty depending on prior probability and number of 1-balls

                  Number of 1-balls
Prior       0        1        2        3        4
3/4       4.25    0.78*    0.69*    4.06    14.19
1/2      14.67     4.25    0.78*    0.69*    4.06
1/4         48    14.67     4.25    0.78*   0.69*

Note. Asterisks mark the six situations of (virtually) equal difficulty.
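The corrected-odds computation itself is a one-liner: take the posterior odds of the more likely urn and subtract 1. A short sketch (Python; written for this text, using the exact posterior odds, so results match the rounded table entries only approximately):

```python
def corrected_odds(odds_left):
    """Inverse difficulty: odds favoring the MORE likely urn, minus 1."""
    favored = odds_left if odds_left > 1 else 1 / odds_left
    return favored - 1

# The six equal-difficulty cells of Table 15 come out near 0.69 and 0.78:
a = corrected_odds(27 / 16)   # prior 3/4, two 1-balls: 11/16 = 0.6875 ("0.69")
b = corrected_odds(9 / 16)    # prior 3/4, one 1-ball: 16/9 - 1 = 7/9 ("0.78")
```

These two values recur along the diagonals of Table 15 because shifting the prior one step down while adding one 1-ball leaves the posterior odds unchanged.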
For CB 1, in two of these equally difficult cases the observed data are two blue balls, and
in two cases the data are three blue balls. As interpreted here, the representativeness
hypothesis predicts that when participants observe three (two) blue balls, they will tend to
believe that the sample came from the population it is representative of (i.e., the left (right)
urn). Of the six cases with posterior odds of 1.69:1 or 1.78:1, a participant using the representativeness heuristic will be led to make the correct response in two cases, the incorrect response in two cases, and be uninfluenced in two cases. A Bayesian decision
maker will equally consider both the prior odds and sample evidence in making choices.
The six situations of interest with equal difficulty are referred to as alignment
(representativeness heuristic leads to the correct choice: prior 1/2 and 2 1-balls; prior 1/2
and 3 1-balls), conflict (representativeness heuristic leads to the wrong choice: prior 3/4
and 2 1-balls; prior 1/4 and 3 1-balls) and neutral (representativeness heuristic does not
apply: prior 3/4 and 1 1-ball; prior 1/4 and 4 1-balls). Table 16 displays the classification
of decision situations, with the six situations of equal difficulty marked.
Table 16
Classification of decision situations depending on prior probability and number of 1-balls

                     Number of 1-balls
Prior       0           1           2            3            4
3/4      Neutral     Neutral*    Conflict*    Alignment    Neutral
1/2      Neutral     Neutral     Alignment*   Alignment*   Neutral
1/4      Neutral     Neutral     Alignment    Conflict*    Neutral*

Note. Asterisks mark the six situations of (virtually) equal difficulty.
Alternatively, a different classification is possible (see Table 17): Three outcome-prior
combinations are situations in which the most likely alternative is not the urn with the
higher prior probability. This is the case for a prior probability of 3/4 and 0 or 1 1-ball, and
a prior probability of 1/4 and a sample of 4 1-balls. These situations are referred to as
conservative because in these situations, Bayesian updating conflicts with the conservatism
heuristic. If participants commit an error in these cases, they have underweighted sample
evidence and overweighted priors, or have decided according to a simple “base-rate only”
heuristic.
Table 17
Alternative classification of decision situations depending on prior probability and number of 1-balls

                          Number of 1-balls
Prior        0              1              2            3            4
3/4      Conservative   Conservative   Conflict     Alignment    Neutral
1/2      Neutral        Neutral        Alignment    Alignment    Neutral
1/4      Neutral        Neutral        Alignment    Conflict     Conservative
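Both classifications can be expressed compactly: the representativeness heuristic applies only to samples of two or three 1-balls, and a cell is conservative when the urn with the higher prior is not the Bayes-correct urn. A sketch consistent with Tables 16 and 17 (Python; written for this text, counterbalance condition 1):

```python
def classify(prior_left, k):
    # Classify a prior-outcome cell as in Table 17 (alternative classification).
    # Posterior odds for the left urn: prior odds times likelihood ratio 3**k/16.
    odds_left = (prior_left / (1 - prior_left)) * 3**k / 16
    bayes_left = odds_left > 1
    if k in (2, 3):                     # sample resembles one of the urns
        heuristic_left = (k == 3)       # three 1-balls resemble the left urn
        return "alignment" if heuristic_left == bayes_left else "conflict"
    if prior_left != 1/2 and (prior_left > 1/2) != bayes_left:
        return "conservative"           # higher-prior urn is not the Bayes choice
    return "neutral"
```

Dropping the conservative branch (returning "neutral" instead) yields the simpler classification of Table 16.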
Participants were not informed about optimal choices or strategic considerations. By means
of detailed instructions and thorough questioning, it was ensured that the participants had a
complete understanding of the experimental set-up and rules of the decision task (see
above).
EEG Acquisition and Analysis
Data were acquired using the BioSemi Active II system (BioSemi, Amsterdam, The
Netherlands, www.biosemi.com) and analyzed using Brain Electrical Source Analysis
(BESA) software (BESA GmbH, Gräfelfing, Germany, www.besa.de).
The continuous EEG was recorded during the computer task (for approximately 70
minutes), while participants were seated in a dimly lit, soundproof chamber. Data were
acquired using 64 Ag-AgCl pin-type active electrodes mounted on an elastic cap (ECI),
arranged according to the 10-20 system, and from two additional electrodes placed at the
right and left mastoids. Eye movements and blinks were monitored by electro-oculogram
(EOG) signals from two electrodes, one placed approximately 1 cm to the left side of the
left eye and another one approximately 1 cm below the left eye (for later reduction of
ocular artifact). As per BioSemi system design, the ground electrode during data
acquisition was formed by the Common Mode Sense active electrode and the Driven Right
Leg passive electrode. Electrode impedances were maintained below 20 kΩ. Both EEG and EOG were sampled at 256 Hz. All data were re-referenced off-line to a linked mastoids
reference and corrected for ocular artifacts with an eye-movement correction algorithm
implemented in BESA software.
LRP Analysis
Stimulus-locked data were segmented into epochs from 3100 ms before to 200 ms
after stimulus onset (presentation of sample evidence); the baseline of 100 ms before the
prior was used for baseline correction. Epochs for different prior probabilities were
averaged separately, producing three average waveforms per participant. Epochs including
an EEG or EOG voltage exceeding +/-120 μV were omitted from the averaging in order to
reject trials with excessive electromyogram (EMG) or other interference. Following Eimer
(1998), C3 - C4 difference amplitudes were computed. As a result, activations of left-hand
responses were seen in the LRP waveforms as positive deflections while activations of
right-hand responses were seen in the LRP waveforms as negative deflections. LRP
difference waveforms were computed by subtracting waveforms for different priors from
each other. Grand averages were derived by averaging these waveforms across
participants. On average 34 % of trials were excluded due to artifacts.
To quantify the LRP in the averaged ERP waveforms for each participant, the mean
amplitude during the 100 ms time interval preceding the onset of the stimulus (presentation of sample evidence) was calculated. This time window was chosen because it
reflects participants' left-right orientation immediately before the sample is presented and a
decision is required.
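As a sketch of this quantification step (Python with NumPy; the array layout of trials x channels x samples, the channel indices, and all names are assumptions made for illustration, not BESA's actual data format):

```python
import numpy as np

FS = 256    # sampling rate in Hz
T0 = 3.1    # each epoch starts 3100 ms before sample onset

def lrp_amplitude(epochs, ch_c3, ch_c4):
    """Mean C3 - C4 difference amplitude in the 100 ms before sample onset,
    after rejecting epochs exceeding +/-120 uV on any channel (sketch;
    epochs has shape trials x channels x samples)."""
    keep = np.abs(epochs).max(axis=(1, 2)) <= 120   # artifact rejection
    diff = epochs[keep, ch_c3, :] - epochs[keep, ch_c4, :]
    avg = diff.mean(axis=0)                         # average over kept trials
    pre = slice(int((T0 - 0.1) * FS), int(T0 * FS)) # -100..0 ms window
    return avg[pre].mean()
```

With this sign convention, left-hand response activation appears as a positive value and right-hand activation as a negative value, as described above.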
N2 Analysis
For analysis of the N2, stimulus-locked data were segmented into epochs from 100
ms before to 1000 ms after stimulus onset (presentation of sample evidence); the baseline
of 100 ms was used for baseline correction. Only trials with correct reactions were used for
data analysis. Epochs locked to conflict stimuli and neutral stimuli were averaged
separately, producing two average waveforms per participant. Epochs including an EEG or
EOG voltage exceeding +/-120 μV were omitted from the averaging in order to reject trials
with excessive electromyogram (EMG) or other interference. The difference wave was
computed by subtracting conflict waveforms from neutral waveforms. Grand averages
were derived by averaging these ERPs across participants. On average 8 % of trials were
excluded due to artifacts.
To quantify the N2 in the averaged ERP waveforms for each participant, the mean
amplitude in the interval 200-320 ms after stimulus onset (presentation of sample
evidence) was calculated. This time window was chosen because previous research has
found the N2 to peak during this period (e.g., Bartholow & Dickter, 2008; Nieuwenhuis et
al., 2003) and because the peak of the N2 occurred at 260 ms in the grand-average
waveform. Consistent with previous studies, the N2 amplitude was evaluated at channel
FCz, where it is normally maximal (e.g., Bartholow, Riordan, Saults, & Lust, 2009;
Nieuwenhuis et al., 2003).
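The N2 quantification can be sketched analogously (Python with NumPy; the names and the assumption of a 256 Hz averaged waveform running from -100 to +1000 ms are illustrative):

```python
import numpy as np

FS = 256
PRE = 0.1   # epochs start 100 ms before stimulus onset

def n2_mean_amplitude(erp_fcz):
    """Mean amplitude 200-320 ms post-stimulus in an averaged FCz waveform
    (one participant's baseline-corrected, correct-trial average)."""
    win = slice(int((PRE + 0.20) * FS), int((PRE + 0.32) * FS))
    return erp_fcz[win].mean()

def n2_difference(erp_conflict, erp_neutral):
    """Conflict-related N2, following the text: conflict waveforms are
    subtracted from neutral waveforms before quantification."""
    return n2_mean_amplitude(erp_neutral - erp_conflict)
```

A more pronounced (more negative) N2 on conflict trials thus shows up as a positive value of the difference measure.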
2.3.3 Results
Preliminary Remark concerning the Analysis of Error Rates
As in Study 1, error rates can be analyzed either in aggregate over a group of participants or individually for each participant. For instance, when comparing the frequency of errors in different decision situations, there are two possible procedures for the statistical analysis: one could either compute the percentage of errors in each decision situation across participants, or one could compute error rates per decision situation for each participant and then combine these individual percentages for each situation. For
the reasons described above (see 2.1.3), non-parametric statistics were used to analyze
error rates. As an alternative to a t-test for paired samples, the Wilcoxon signed-rank test
(Wilcoxon, 1945) was used for the analysis of individual error rates. When analyzing the
aggregate error rates, Fisher's exact test was applied as an alternative to a Chi-square test
for 2 x 2 contingency tables or imbalanced tables, for instance of the form {Situation} x
{Correct, Error}. All tests are two-tailed unless otherwise specified.
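As an illustration of the two procedures (Python with SciPy, which is assumed to be available; the aggregate counts are reconstructed from the percentages and totals reported in Table 20, while the per-participant rates are purely invented for the example):

```python
from scipy.stats import fisher_exact, wilcoxon

# Aggregate analysis: 2 x 2 table of the form {Situation} x {Correct, Error}.
# Counts derived from Table 20: 11.57 % errors of 2368 alignment trials,
# 24.22 % errors of 1255 neutral trials.
alignment = [2368 - 274, 274]
neutral = [1255 - 304, 304]
_, p_aggregate = fisher_exact([alignment, neutral])

# Individual analysis: paired, non-parametric comparison of per-participant
# error rates (the six rates below are purely illustrative).
align_rates = [0.08, 0.12, 0.10, 0.15, 0.09, 0.11]
neutral_rates = [0.18, 0.23, 0.22, 0.28, 0.23, 0.26]
_, p_individual = wilcoxon(align_rates, neutral_rates)
```

Fisher's exact test treats every trial as one observation, whereas the Wilcoxon signed-rank test treats every participant as one observation; the two can therefore disagree when a few participants contribute many errors.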
First Overview
Prior to the main analyses, it was examined how participants (according to their self-evaluations) had worked on the task, how they had perceived the task, and how they rated their skill level for math and statistics. Most ratings were given on an
analog scale (length: 10 cm; i.e., scores ranged continuously between 0 and 10).
As assessed by the experimenter, the participants had a very good understanding of
the rules of the experiment (on a 5 cm analog scale: M = 4.21, SD = 0.87), which parallels the participants' opinion of the matter (M = 7.77, SD = 1.47). Participants reported that
they put forth a good deal of effort towards properly accomplishing the experiment (M =
8.70, SD = 1.08), and they also felt that it was important to perform well (M = 7.31, SD =
1.87). The potential to receive a considerable amount of money was self-assessed as a factor in participants' motivation for participation in the study (M = 6.97, SD = 2.10) and
also for performing well (M = 7.48, SD = 2.29). On average, the participants did not
consider the task to be difficult (M = 3.87, SD = 2.23). Self-assessed statistical skills (M =
5.07, SD = 2.59) and skills in calculating probabilities (M = 5.41, SD = 2.42) were
moderate, as already observed in Study 1a and 1b. Concerning the sources of such skills,
most participants cited previous courses of study throughout their education. A few also mentioned having previously participated in similar experiments, and still others reported a personal interest in statistics. 17 of 25 (68.0 %) participants majored in a subject with
obligatory statistics courses, and 7 of 24 (29.2 %) participants declared they were already
experienced with the type of decision scenario implemented in the experiment.
Participants considered the process of decision making while their EEG was being
recorded to be moderately stressful (M = 5.59, SD = 2.47). Wearing the cap and feeling the
gel on the scalp were not deemed unpleasant (M = 2.00, SD = 2.26). Participants found it
difficult to avoid body movements and eye blinks (M = 7.16, SD = 2.11). Altogether,
participants considered their participation in the study to be moderately exertive (M = 5.72,
SD = 2.02).
On average, participants won 26.54 €, ranging from 24.48 € to 27.84 €. The Bayesian
updating task took about 70 minutes (see Table 18):
Table 18
Amount won and duration of the computer task

Amount Won in €                        Duration in Minutes
M (SD)         min      max            M (SD)         Median (IQR)   min     max
26.54 (0.81)   24.48    27.84          70.32 (3.67)   69.81 (4.53)   65.41   79.86
Equivalence of Groups
Counterbalancing. For half of the participants, the urns were filled with a majority of blue balls (n = 13) and for the other half with a majority of green balls (n = 12). It was tested whether this manipulation led to systematic differences between the groups concerning personal characteristics. Testing revealed no differences between the two counterbalance conditions (all p-values > .137). Therefore, data of both conditions were pooled.
Nonetheless, as a precaution, the dummy which represents the counterbalance condition
was included in the regression analyses (see below).
Error Rates
Table 19 shows the error rates for all prior-outcome combinations. In Table 20,
error rates for the relevant decision situations are shown:
Table 19
Error frequencies

                          Number of 1-balls
Prior    0              1              2               3               4
3/4      11.9 % (101)   17.8 % (518)   32.2 % (1379)   2.5 % (2072)    0.4 % (1378)
1/2      2.4 % (123)    4.0 % (531)    9.5 % (1113)    13.4 % (1273)   2.2 % (727)
1/4      1.2 % (256)    0.8 % (1166)   1.5 % (1861)    23.8 % (1680)   28.8 % (737)

Note. Total number of observations in brackets
Table 20
Error frequencies

Decision Situation    Alignment         Conflict          Neutral
Error Rate            11.57 % (2368)    27.56 % (3059)    24.22 % (1255)

Note. Total number of observations in brackets
In order to test H1, error rates were compared across situations of equal difficulty in
which the representativeness heuristic led to the correct choice (alignment), led to the
wrong choice (conflict), or did not apply (neutral). Fisher's exact tests were applied to
compare the conditions aggregated over all participants, and Wilcoxon signed-rank tests
were applied to compare individual error rates. Using Fisher's exact tests, all three
comparisons reached significance: In alignment situations, fewer errors were committed
compared to neutral situations, p < .001. The difference in error rates between conflict
situations and neutral situations was also significant, p < .05. When analyzing individual
error rates using Wilcoxon signed-rank tests, the difference between alignment situations
and neutral situations (z = 2.44, p < .05) remained significant. However, with this
statistical test, error rates in conflict situations were not significantly higher than
error rates in neutral situations, z < 1.
Next, reaction times were investigated in order to test H2 and H3. Table 21 reports the
median response times, classified according to the 15 possible prior-outcome
combinations. Table 22 shows median response times for the three relevant types of
decision situations:
Table 21
Median response times (in ms)

                 Number of 1-balls
Prior    0        1        2        3        4
3/4      782      905.5    949      648      554.5
1/2      648      742      760      856      638
1/4      662      649.5    631      952.5    890

Note. Total number of observations is as in Table 19
Table 22
Median response times (in ms)

Decision Situation    Alignment    Conflict    Neutral
Median RT             796          951         905

Note. Total number of observations is as in Table 20
To compare the reaction times of equally difficult alignment situations and conflict
situations with neutral situations, Wilcoxon signed-rank tests were performed. For this
purpose, the mean reaction time47 for each of the three decision conditions
was computed separately for each participant. Results showed that decisions were made
significantly faster in alignment situations (M = 6.78, SD = 0.33) compared to neutral
situations (M = 6.86, SD = 0.39), z = 1.90, p < .05 (one-sided). The difference between
reaction times of conflict situations (M = 6.88, SD = 0.44) and neutral situations was not
significant, z = 1.09, p = .28.
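The signed-rank comparison used here can be illustrated with a short script. The following is a minimal sketch with invented per-participant mean log response times (not the study's data); it computes the exact one-sided Wilcoxon signed-rank p-value from scratch:

```python
from itertools import combinations

def wilcoxon_one_sided(x, y):
    """Exact one-sided Wilcoxon signed-rank test of H1: x tends to be
    smaller than y. Assumes no zero and no tied absolute differences,
    which holds for small samples of continuous (log) response times."""
    d = [a - b for a, b in zip(x, y)]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    rank = {i: r + 1 for r, i in enumerate(order)}
    w_plus = sum(rank[i] for i in range(len(d)) if d[i] > 0)
    n = len(d)
    # Under H0, every subset of ranks is equally likely to be the set of
    # positive-difference ranks; count those with a rank sum <= w_plus.
    favorable = sum(
        1
        for k in range(n + 1)
        for subset in combinations(range(1, n + 1), k)
        if sum(subset) <= w_plus
    )
    return w_plus, favorable / 2 ** n

# Invented per-participant mean log RTs (alignment vs. neutral)
alignment = [6.50, 6.70, 6.80, 6.60, 6.90, 6.75, 6.85, 6.70, 6.60, 6.80]
neutral   = [6.55, 6.80, 6.82, 6.68, 6.89, 6.81, 6.97, 6.74, 6.69, 6.83]
w_plus, p = wilcoxon_one_sided(alignment, neutral)
```

The exact enumeration is only feasible for small n; for n = 25, as in the study, statistical packages switch to a normal approximation, which is where the reported z values come from.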
To test H3, a median split of decisions was computed as follows: For each participant, the
median response time was computed separately for each decision situation. Then,
participants' decisions were classified as fast or slow, depending on whether the
response times were below or above the individual median, respectively. This subsequently
47 Response times were log-transformed for analysis.
produced a split of the error rates, which is shown in Table 23 (see footnote 48). Error
rates for the three relevant decision situations are presented in Table 24:
Table 23
Error frequencies after fast and slow decisions

                       Number of 1-balls
Prior   Speed   0             1             2             3             4
3/4     fast    15.9 % (44)   17.9 % (251)  33.1 % (682)  1.2 % (1029)  0.3 % (678)
        slow    8.8 % (57)    17.6 % (267)  31.3 % (697)  3.7 % (1043)  0.4 % (700)
1/2     fast    5.5 % (55)    4.3 % (258)   8.9 % (551)   10.7 % (626)  2.2 % (356)
        slow    0 % (68)      3.7 % (273)   10.1 % (562)  15.9 % (647)  2.2 % (371)
1/4     fast    0 % (119)     0.7 % (576)   0.7 % (920)   21.7 % (835)  30.3 % (363)
        slow    2.2 % (137)   0.8 % (590)   2.2 % (941)   25.8 % (845)  27.3 % (374)

Note. Total number of observations in brackets
Table 24
Error frequencies after fast and slow decisions

Speed   Alignment        Conflict         Neutral
fast    9.86 % (1177)    26.83 % (1517)   23.24 % (641)
slow    13.23 % (1209)   28.27 % (1542)   25.24 % (614)

Note. Total number of observations in brackets
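The fast/slow classification underlying these tables can be sketched in a few lines. This is a minimal illustration with invented response times; in the study, one median was computed per participant and per decision situation:

```python
import statistics

def split_fast_slow(rts):
    """Label each response time 'fast' or 'slow' relative to the median
    of its own list (one list per participant and decision situation).
    Times exactly equal to the median are labeled 'slow' here; how the
    study resolved exact ties is not specified in the text."""
    med = statistics.median(rts)
    return ["fast" if rt < med else "slow" for rt in rts]

# Invented response times (in ms) of one participant in one situation
labels = split_fast_slow([640, 710, 980, 1200, 850])
```

Error rates are then tabulated separately within the "fast" and "slow" subsets, as in Tables 23 and 24.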
A Fisher's exact test confirmed that, in situations where the representativeness heuristic led
to the correct choice (alignment), fast decisions were associated with fewer errors than
slow ones, p < .05. Contrary to expectations, error rates for fast and slow decisions did not
differ when the heuristic led to the wrong choice (conflict), p = .37.
48 This involved computing 15 (respectively, three) different medians per individual. Compared to computing only one median per individual, this method has the disadvantage of relying on smaller sample sizes for the computation. Nevertheless, it is statistically preferable since response times for identical situations are grouped together.
LRP
The grand-average difference waveforms (n = 25) from channel C3 - C4 as a function of
prior probability are shown in Figure 28A. As previously stated, activations of left-hand
responses appear as positive deflections in the LRP waveforms and activations of right-hand
responses appear as negative deflections. Visual inspection of the waveforms shows that,
beginning at about 700 ms after presentation of the prior stimulus, participants exhibited
differentiated lateralized activity. Three LRP difference waveforms were derived by
computing pairwise differences between the three prior conditions. These are shown in
Figure 28B.
[Figure 28 appears here: (A) grand-average LRP waveforms at C3-C4 for priors 1/4, 1/2, and 3/4 for the left urn; (B) difference waveforms (prior 3/4 - prior 1/2, prior 1/2 - prior 1/4, prior 3/4 - prior 1/4). Amplitude in µV is plotted against time in ms (-3000 to 0).]
Figure 28. (A) Grand-average stimulus-locked LRP as a function of prior probability. Time zero
corresponds to the presentation of sample evidence. Time -3000 corresponds to the presentation of
prior probabilities. Positive and negative LRP values indicate hand-specific activation of the left
and right response, respectively. (B) Grand-average stimulus-locked LRP difference waveforms as
a function of subtraction method. Time zero corresponds to the presentation of sample evidence.
Time -3000 corresponds to the presentation of prior probabilities.
As a measure of lateralized activity which depends upon prior probability, participants'
mean amplitudes of the difference wave (prior 3/4 minus prior 1/4) during the 100 ms time
interval preceding the onset of the stimulus (presentation of sample evidence) were
calculated. The resulting value reflects the extent to which participants are differentially
oriented towards the response corresponding to the urn with the higher prior probability,
just before the sample evidence is presented. According to H5, this measure should be
predictive of the level of conservatism, i.e., the rate of conservative errors. This assumption
will be tested in the regression analyses.
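The amplitude measure just described can be sketched as follows. This is a minimal illustration with synthetic waveforms; the actual sampling rate, baseline handling, and data differ:

```python
def lrp_difference_measure(lrp_prior_34, lrp_prior_14, times_ms):
    """Mean amplitude (in µV) of the difference wave (prior 3/4 minus
    prior 1/4) in the 100 ms window before sample onset (time 0)."""
    window = [
        a - b
        for a, b, t in zip(lrp_prior_34, lrp_prior_14, times_ms)
        if -100 <= t < 0
    ]
    return sum(window) / len(window)

# Synthetic example: 1 ms resolution from -3000 ms to sample onset
times_ms = list(range(-3000, 1))
n = len(times_ms)
lrp_34 = [1.5 * k / (n - 1) for k in range(n)]  # invented waveforms
lrp_14 = [0.5 * k / (n - 1) for k in range(n)]
measure = lrp_difference_measure(lrp_34, lrp_14, times_ms)
```

A larger value of this measure indicates a stronger pre-stimulus orientation towards the response associated with the higher prior probability.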
Using a median split procedure, participants were classified as having either a low
or high rate of conservative errors49. Figure 29 shows the grand-average difference
waveforms separately for both participant groups.
[Figure 29 appears here: grand-average LRP difference waveforms at C3-C4, shown separately for participants with a low rate of conservative mistakes (n = 13) and a high rate of conservative mistakes (n = 12). Amplitude in µV is plotted against time in ms (-3000 to 0).]
Figure 29. Grand-average stimulus-locked LRP difference waveforms (prior 3/4 minus prior 1/4)
as a function of rate of conservative errors. Time zero corresponds to the presentation of sample
evidence. Time -3000 corresponds to the presentation of prior probabilities. The more positive the
value of the LRP difference amplitude, the stronger is the hand-specific activation of the response
with the higher prior probability.
N2
As a measure of the response conflict produced by the representativeness heuristic,
conflict vs. neutral situations followed by correct decisions were compared. As previously
mentioned, both types of situations are assumed to be approximately equal in their degrees
of difficulty. Additionally, in both types of situations there is some type of response
conflict because the sample evidence does not resemble the urn with the higher prior
probability. Nonetheless, it is hypothesized that, in situations where the representativeness
heuristic applies, a higher level of conflict is experienced because a rather automatic
process conflicts with the more controlled process of Bayesian updating. This difference
should be detectable in the N2 amplitude which is expected to be more strongly
pronounced for conflict situations compared to neutral situations.
Figure 30A shows the grand-average waveforms (n = 25) from channel FCz as a
function of decision situation. Visual inspection shows that, beginning at about 200 ms
after the presentation of the sample stimulus, the two waveforms diverged. Visual
inspection further suggests three distinct components which
49 Median percentage of conservative errors was 12.8 %.
differed between conflict and neutral situations: a frontocentral N2 peaking at about 260
ms poststimulus which was more pronounced for conflict situations; a central P3a peaking
at about 350 ms and a parietal P3b peaking at about 520 ms which were more pronounced
for neutral situations. Topographic mapping of the difference waveform in the
corresponding time frames provided further evidence for these three components (see
Figure 30D). Figure 30B shows the grand-average difference waveform (n = 25) from
channel FCz which is calculated by subtracting the conflict waveform from the neutral
waveform. The corresponding topography of the N2 is shown in Figure 30C.
[Figure 30 appears here: (A) grand-average waveforms at FCz for conflict and neutral situations, with the N2, P3a, and P3b components marked; (B) the difference waveform (neutral minus conflict); (C) the N2 topography (scale ±2 µV); (D) topographic maps for the N2 (235-285 ms), P3a (325-375 ms), and P3b (495-545 ms) time windows (scale ±5 µV). Amplitude in µV is plotted against time in ms (0 to 800).]
Figure 30. (A) Grand-average N2 as a function of decision situation. Time zero corresponds to the
presentation of sample evidence. (B) Grand-average N2 difference waveform (neutral minus
conflict). Time zero corresponds to the presentation of sample evidence. (C) Topographical
mapping of N2 difference waveform (200-320 ms poststimulus). (D) Spatial distribution of the
difference between conflict and neutral situations as a function of time window.
In order to test for a difference in the amplitudes of the N2 resulting from conflict and
neutral situations, mean amplitudes of the difference wave between 200 and 320 ms after
stimulus onset were calculated for all participants. Amplitudes were tested against zero
using a one-sample t-test. As expected in H6, this test revealed a significant difference
from zero (M = 1.03, SD = 2.10), t(24) = 2.44, p < .05.
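The reported statistic can be checked directly from the summary values, since for a one-sample t-test t = M / (SD / sqrt(n)). A small sketch; the numeric check uses the reported M, SD, and n, while the example data at the end are invented:

```python
import math
from statistics import mean, stdev

def one_sample_t(values, mu=0.0):
    """t statistic of a one-sample t-test against the null mean mu."""
    n = len(values)
    return (mean(values) - mu) / (stdev(values) / math.sqrt(n))

# Check of the reported value: M = 1.03, SD = 2.10, n = 25
t_reported = 1.03 / (2.10 / math.sqrt(25))

# Invented difference amplitudes, just to exercise the function
t_demo = one_sample_t([1.0, 2.0, 0.5, -0.3, 1.9])
```

The check yields t ≈ 2.45, matching the reported t(24) = 2.44 up to rounding of M and SD.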
According to H7, the N2 difference amplitude on correct trials (as an indicator of conflict
monitoring in relation to conflicts between the representativeness heuristic and Bayesian
updating) should be predictive of the error rate in conflict situations. This assumption will
be tested in the regression analyses.
Using a median split procedure, participants were classified as having either a low
or high error rate during situations in which the representativeness heuristic conflicted with
Bayesian updating (conflict)50. Figure 31A shows the grand-average waveforms from
channel FCz separately for both groups of participants. The corresponding topographies of
the difference waveforms (neutral minus conflict) are shown in Figure 31B.
50 Median percentage of conflict errors was 16.2 %.
[Figure 31 appears here: (A) grand-average waveforms at FCz for conflict and neutral situations, shown separately for participants with a low rate of conflict mistakes (n = 13) and a high rate of conflict mistakes (n = 12), with amplitude in µV plotted against time in ms (0 to 800); (B) the corresponding difference-waveform topographies (scale ±4 µV).]
Figure 31. (A) Grand-average N2 as a function of error rate in conflict situations and decision
situation. Time zero corresponds to the presentation of sample evidence. (B) Topographical
mapping of N2 difference waveform (neutral minus conflict) (200-320 ms poststimulus) as a
function of error rate in conflict situations.
Regression Analyses
Data form an unbalanced panel, with 586 to 600 observations51 per participant. Due
to the non-independence of observations within participants, data were analyzed using
random-effects probit regressions on decisions (error vs. correct decision) and
random-effects linear regressions on response times.
The results of the regression analyses on decisions are presented in Table 25. As
postulated in H1 and consistent with the results of the Fisher's test reported above, the
dummy which represents conflict was significant in the predicted direction. The likelihood
of an error was significantly higher when the heuristic was opposed to Bayesian updating,
compared to neutral situations. Also in line with expectations (H1) and with the results of
the tests reported above, the likelihood of an error was significantly lower when the
51 Trials in which a key was pressed before presentation of the sample were excluded from analysis.
representativeness heuristic led to the correct choice (alignment) compared to neutral
situations. The dummy for conservative situations (in which conservatism conflicted with
Bayesian updating so that correct decisions were in opposition to base-rates) was
significant in the first model such that errors were more likely in these situations. When the
interaction with faith in intuition was included in the regression model, the direction of the
relationship was inverted. As previously observed in Study 1, time (trial number divided
by 600) also had a significant negative effect on the error rate. As a validation of the basic
design, the counterbalance dummy showed no significant effect. An even stronger
validation of the design was the significant effect of corrected odds (as an inverse measure
of decision difficulty): The probability of an error to occur decreased with increasing
corrected odds (i.e., accuracy increased as the simplicity of a decision increased).
However, marginal accuracy (measured via the squared corrected odds) decreased in the
corrected odds.
Concerning personal characteristics of participants, a gender effect was discovered.
In the final model, the likelihood of an error was significantly lower for male vs. female
participants. Contrary to expectations (H4), a high level of self-assessed knowledge in
statistics did not significantly reduce the likelihood of an error.The H4 prediction that a
positive relationship would exist between faith in intuition and conservative errors was
confirmed by the regression analyses. However, a high level of faith in intuition
significantly reduced the likelihood of an error in conflict situations (i.e., when the
representativeness heuristic and Bayesian updating were opposed). The predicted joy
which a participant thought he/she would feel upon receipt of a monetary reward was also
a significant predictor of error probability. The greater the joy was predicted to be, the less
likely were errors to occur. In contrast, the predicted degree to which a participant thought
they would feel sad if they were to lose some amount of money (which significantly
affected response times, see below) did not reduce the likelihood of an error.
As postulated in H5 and H7, the dummies representing the interaction between LRP
difference amplitude (prior 3/4 minus prior 1/4) and conservative situations and the
interaction between N2 difference amplitude (neutral minus conflict) and conflict
situations were both significant in the predicted direction. A large LRP difference
amplitude was associated with an increased probability of the occurrence of errors during
conservative situations. Regarding conflict situations, a large N2 difference amplitude
decreased the probability of an error.
Table 25
Random-effects probit regressions on decision errors (0 = correct choice, 1 = error)

Variable                            Probit 1     Probit 2     Probit 3     Probit 4     Probit 5
Counterbalance                      -0.045       -0.040       0.054        0.101        0.173
                                    (0.152)      (0.153)      (0.164)      (0.156)      (0.155)
Time                                -0.228***    -0.231***    -0.238***    -0.237***    -0.256***
                                    (0.055)      (0.055)      (0.056)      (0.056)      (0.056)
Corrected Odds                      -0.244***    -0.198***    -0.199***    -0.199***    -0.198***
                                    (0.011)      (0.014)      (0.014)      (0.014)      (0.014)
Corrected Odds²                     0.004***     0.004***     0.004***     0.004***     0.004***
                                    (0.000)      (0.000)      (0.000)      (0.000)      (0.000)
Conflict (1 = Yes)                  0.153***     0.537***     1.776***     1.770***     1.993***
                                    (0.043)      (0.087)      (0.180)      (0.180)      (0.183)
Alignment (1 = Yes)                 -0.524***    -0.196**     -0.198**     -0.198**     -0.196**
                                    (0.042)      (0.078)      (0.078)      (0.078)      (0.078)
Conservative (1 = Yes)                           0.467***     -1.079***    -1.083***    -0.865***
                                                 (0.091)      (0.223)      (0.223)      (0.226)
Personal Characteristics
Gender (1 = Male)                   -0.121       -0.124       -0.190       -0.326*      -0.385**
                                    (0.152)      (0.153)      (0.161)      (0.168)      (0.167)
Knowledge in Statistics                                       -0.015       -0.024       -0.032
                                                              (0.030)      (0.030)      (0.029)
Faith in Intuition52,53 x Conflict                            -0.234***    -0.233***    -0.247***
                                                              (0.029)      (0.029)      (0.030)
Faith in Intuition x Conservative                             0.280***     0.280***     0.229***
                                                              (0.037)      (0.036)      (0.038)
Joy over Monetary Win                                                      -0.103*      -0.098*
                                                                           (0.054)      (0.053)
Sorrow about Monetary Loss                                                 -0.009       0.007
                                                                           (0.033)      (0.033)
Electrocortical Responses
LRP Diff54,55 x Conservative                                                            0.679***
                                                                                        (0.158)
N2 Diff56,57 x Conflict                                                                 -1.520***
                                                                                        (0.146)
Constant                            -0.422***    -0.838***    -0.777***    0.231        0.117
                                    (0.122)      (0.147)      (0.203)      (0.589)      (0.586)
log likelihood                      -3905.200    -3891.385    -3791.878    -3790.170    -3725.852
Wald test                           1175.99***   1256.12***   1437.07***   1438.64***   1532.32***

Notes. Number of observations = 14915. Numbers in brackets are standard errors.
*p < .10. **p < .05. ***p < .001
52 Internal consistency as measured by Cronbach's alpha was good (alpha = .87).
53 An alternative model in which faith in intuition was included as a separate predictor variable was not better.
54 To make this measure more comparable to the other variables, the amplitude was multiplied by a factor of 10.
55 An alternative model in which LRP amplitude was included as a separate predictor variable was not better.
56 See footnote 54.
57 An alternative model in which N2 amplitude was included as a separate predictor variable was not better.
Table 26 presents the results of the linear regressions on response times58. Decision times
decreased with corrected odds (as an inverse measure of decision difficulty). That is, the
speed with which a decision was made increased as the choice became easier. However,
the marginal benefit regarding response time (measured via the squared corrected odds)
was decreasing in the corrected odds. Decision times also decreased over the
course of the experiment. Compared to neutral situations, conflict (H2) and conservative
situations led to significantly longer response times. In line with predictions (H2),
alignment of the representativeness heuristic and Bayesian updating reduced response
times. Counterbalancing the colors of the balls did not have a significant effect on response
times.
Concerning personal characteristics, males required a significantly longer response
time than females in the final model. A high level of self-assessed knowledge in statistics
significantly reduced response times. While faith in intuition and the participants'
predicted joy concerning a monetary reward were significant predictors of error likelihood,
they did not significantly affect response times. Participants' self-reported sorrow about
losing a certain amount of money was positively associated with response times.
N2 difference amplitude (neutral minus conflict) did not have any effect on
response latencies, whereas LRP difference amplitude (prior 3/4 minus prior 1/4) was
negatively associated with response times, almost achieving statistical significance (p =
.10).
58 Response times were log-transformed for analysis.
Table 26
Random-effects linear regressions on decision response times (log-transformed)

Variable                     GLS 1        GLS 2        GLS 3        GLS 4
Counterbalance               0.043        0.044        0.052        -0.072
                             (0.133)      (0.132)      (0.129)      (0.152)
Time                         -0.391***    -0.391***    -0.391***    -0.391***
                             (0.013)      (0.013)      (0.013)      (0.013)
Corrected Odds               -0.047***    -0.042***    -0.042***    -0.042***
                             (0.001)      (0.002)      (0.002)      (0.002)
Corrected Odds²              0.001***     0.001***     0.001***     0.001***
                             (0.000)      (0.000)      (0.000)      (0.000)
Conflict (1 = Yes)           0.033**      0.085***     0.085***     0.085***
                             (0.013)      (0.019)      (0.019)      (0.019)
Alignment (1 = Yes)          -0.131***    -0.088***    -0.088***    -0.088***
                             (0.010)      (0.015)      (0.015)      (0.015)
Conservative (1 = Yes)                    0.081***     0.081***     0.081***
                                          (0.021)      (0.021)      (0.021)
Personal Characteristics
Gender (1 = Male)            0.215        0.215        0.262*       0.334**
                             (0.133)      (0.132)      (0.145)      (0.153)
Knowledge in Statistics                                -0.051**     -0.042*
                                                       (0.023)      (0.024)
Faith in Intuition                                     0.007        0.040
                                                       (0.049)      (0.053)
Joy over Monetary Win                                  -0.010       -0.016
                                                       (0.041)      (0.041)
Sorrow about Monetary Loss                             0.088***     0.071**
                                                       (0.027)      (0.029)
Electrocortical Responses
LRP Diff                                                            -0.413
                                                                    (0.253)
N2 Diff                                                             -0.127
                                                                    (0.263)
Constant                     6.944***     6.888***     6.558***     6.583***
                             (0.101)      (0.101)      (0.545)      (0.538)
R²                           0.161        0.162        0.271        0.294
Wald test                    2975.41***   2993.30***   3008.82***   3012.48***

Notes. Number of observations = 14915. Numbers in brackets are standard errors.
*p < .10. **p < .05. ***p < .001
2.3.4 Discussion
By means of a Bayesian inference task (Grether, 1980, 1992), the conflict between
heuristic and rational decision strategies was examined. Particular focus was on
participants' error rates, response times, and on two event-related potentials (LRP and N2).
Additionally, several relevant person variables were assessed (e.g., faith in intuition and
skills in statistics).
The following issues were investigated in the present study: (1) Do error rates differ
between situations when Bayesian updating and the representativeness heuristic are aligned
and situations when these are opposed, compared to situations in which the heuristic does
not apply? (2) Are response times affected by the conflict or alignment between Bayesian
updating and the representativeness heuristic? (3) Are fast decisions more likely to be
correct/incorrect in alignment/conflict situations? (4) Are skills in statistics and faith in
intuition related to any particular type(s) of error? (5) Is the magnitude of the LRP
amplitude (before the presentation of sample evidence) positively associated with
conservative behavior? (6) Is the magnitude of N2 amplitude especially pronounced in
situations in which the representativeness heuristic conflicts with Bayesian updating? (7) Is
there a negative relationship between N2 amplitude and error rate in situations in which the
representativeness heuristic conflicts with Bayesian updating?
Overall, the error rates, response times, and ERPs provided information about the
different decision strategies which participants employed when confronted with prior
probabilities and with further information that could be used to update these priors.
Conflict between the controlled process of Bayesian updating and the rather automatic
process of representativeness significantly affected error rates, response times, and
event-related potentials. Participants' N2 amplitude reflected the extent to which they were able
to detect this particular conflict and suppress the associated automatic response: large
difference amplitudes (neutral situations minus conflict situations) on correct trials were
associated with a lower error rate in situations in which the representativeness heuristic
conflicted with Bayesian updating. Furthermore, a tendency to solely rely on base-rates
(conservatism) was observed in participants‟ LRP amplitude: large difference amplitudes
(prior 3/4 minus prior 1/4) present before the presentation of sample evidence were related
to a high degree of conservatism. Again, faith in intuition was correlated with participants‟
decision behavior. Since intuitive decision makers were prone to making conservative
decisions, they performed better than the less intuitive participants in situations where the
representativeness led to the wrong choice (due to the design of the experiment).
On the whole, the results support the basic hypothesis that decisions in this
paradigm can be understood as the result of the interaction of several processes: a simple
strategy which considers base-rates only and ignores sample evidence (conservatism), a
rather automatic process corresponding to a heuristic which ignores prior probabilities and
takes into account the sample's similarity to the parent population (representativeness
heuristic), and a more controlled process of Bayesian updating which can be considered
rational in an economic sense.
On average, participants reported that they put a lot of effort into completing the
experiment properly and found it quite important to perform well. They considered the task
rather easy and had a very good understanding of the instructions (as indicated by
self-assessment and the experimenter's assessment). The performance-related monetary
incentive was cited as a motivating factor for participation and good performance. On
average, the experiment took about 70 minutes. Overall, participants considered the
experiment to be somewhat strenuous and stressful.
A learning effect was observed in spite of the fact that no feedback was given on the
correctness of the decisions. Regression analyses showed that decision times decreased
during the experiment. Moreover, errors became less likely with increasing trial number.
Thus, performance improved with experience (see also Harrison, 1994). This finding is
consistent with the results of Grether (1980) who also observed an effect of experience
with the problems (defined as having previously been exposed to the same prior-outcome
combination), especially a decrease in the randomness of judgment. Perhaps the
participants' understanding of the task's underlying principles began to solidify after
they developed some familiarity with the task. Alternatively, participants may have
attempted to determine the best response to each decision problem through further
reflection and gradually developed more effective strategies even in the absence of feedback.
As a validation of the experimental design, the corrected odds (as an inverse measure of
the difficulty of a decision) had a significant effect on the probability of an error as well as
on response time. When the posterior probability of the most likely alternative was close
to 0.5 (as computed via Bayes' rule), decisions were associated with longer response
times and higher error rates. In contrast, accuracy and speed increased with increasing
corrected odds (i.e., with increasing simplicity of a decision), while also exhibiting
Study 2
139
decreasing returns. This finding strongly suggests that participants' behavior was based on
some updating process. It is not believed that participants calculated posterior
probabilities exactly according to Bayes' rule, but they seem to have approximated
Bayesian inference. When this approximation was straightforward, decisions were generally
fast and correct. But when the estimation indicated that both alternatives were almost
equally probable, errors became more likely and response times increased.
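This kind of updating can be made concrete with Bayes' rule for the two-urn task. The sketch below computes posterior odds from a prior and a sample; note that the urn compositions used here (2/3 vs. 1/3 type-1 balls) are illustrative assumptions, not necessarily the proportions used in the experiment:

```python
from math import comb

def posterior_odds(prior_a, k, n, p_a, p_b):
    """Posterior odds in favor of urn A after observing k type-1 balls
    in a sample of n drawn with replacement (binomial likelihoods).
    p_a and p_b are the proportions of type-1 balls in urns A and B."""
    like_a = comb(n, k) * p_a ** k * (1 - p_a) ** (n - k)
    like_b = comb(n, k) * p_b ** k * (1 - p_b) ** (n - k)
    prior_odds = prior_a / (1 - prior_a)
    return prior_odds * (like_a / like_b)

# Prior 3/4 for urn A, sample of 4 balls with 3 type-1 balls,
# assuming (hypothetically) urn A holds 2/3 and urn B 1/3 type-1 balls
odds = posterior_odds(0.75, 3, 4, 2 / 3, 1 / 3)
```

With equal priors and a balanced sample, the odds equal 1, matching the observation that such decisions are hardest; the further the odds deviate from 1, the easier the decision, which is what the corrected-odds regressor captures.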
Results of pairwise tests and results of the probit regression analyses revealed that,
compared to neutral situations of equal difficulty, decision errors were significantly less
likely to occur when the representativeness heuristic led to the correct choice, and
significantly more likely to occur when the heuristic was opposed to Bayesian updating.
This is in line with the findings of Grether (1980) and confirms H1. As expected, decision
makers committed only a few mistakes when the controlled process of Bayesian updating
and the automatic process of representativeness were aligned. In contrast, whenever the
two strategies were opposed, leading to different outcomes, error rates were significantly
higher. This result is consistent with a dual-process approach (see Evans, 2008; Sanfey &
Chang, 2008; Weber & Johnson, 2009) which assumes that controlled processes often are
perturbed by conflicting automatic processes. The representativeness heuristic is regarded
as a rather automatic process since it corresponds to a process of pattern recognition as
opposed to deliberate calculations (e.g., Klein, 1993); this can be labeled “matching
intuition” (Glöckner & Witteman, 2010a). If a certain pattern is recognized, a feeling or an
action tendency towards a particular response is automatically elicited. Similarity (e.g.,
Tversky & Kahneman, 1983) as a stimulus feature is evaluated during perception and
comprehension as a matter of routine. Therefore, this feature is always accessible in
memory, whereas other attributes are accessible only if they have been evoked or primed.
Thus, similarity is likely to represent an attribute on which a heuristic decision strategy is
based (Kahneman & Frederick, 2002). Actually, the recognition of a certain pattern in the
sample evidence is not relevant for the calculation of posterior probabilities. Nevertheless,
it is salient information which has been activated from memory and may trigger the
automatic process of representativeness (see Glöckner & Betsch, 2008b). The
corresponding response tendency may conflict with more controlled processes, resulting in
an increased probability of errors. Or, it may deliver the correct decision, resulting in a
decreased probability of decision errors.
It should be noted that Grether‟s (1980) pattern of results concerning error rates
was replicated in the present study despite a couple of modifications to the original
paradigm. Whereas in the original experiment balls were actually drawn from real urns, the
present study was run as a computer experiment. Moreover, the number of trials (i.e.,
decisions) was substantially expanded so that sufficient data for analyzing stimulus-locked
ERPs could be collected59. Further, inter-stimulus- and inter-trial-intervals were employed
in order to ensure that the neural processing of the presented stimuli could run to
completion and did not overlap with the presentation of subsequent stimuli (an overlap
would have caused problems for ERP analysis).
H2 postulated that a conflict between the representativeness heuristic and Bayesian
updating would result in prolonged response times. This hypothesis was confirmed.
The linear regression analyses indicated that a conflict between the heuristic and
Bayes' rule affected the time participants took to decide from which urn the sample was
drawn. Compared to neutral situations of equal difficulty, decisions were made more
slowly when the heuristic was opposed to Bayesian updating. This result is consistent with
the assumptions of the dual-process approach (see Evans, 2008; Sanfey & Chang, 2008;
Weber & Johnson, 2009). If two or more strategies are opposed, the decision conflict puts
a higher demand on cognitive resources, thereby leading to longer response times. H2
further postulated that decisions would be made more quickly in situations where the
representativeness heuristic led to the correct choice, compared to situations of equal
difficulty in which the heuristic did not apply. This prediction was based on the assumption
that, if both decision processes are aligned, the automatic process becomes an efficient
decision shortcut (as processes that are rather automatic should result in shorter response
times; e.g., Lieberman, 2003; Schneider & Shiffrin, 1977; Strack & Deutsch, 2004). This
assumption could be confirmed in both the pairwise tests and in the regression analyses on
response times. Decisions were more quickly made in alignment situations compared to
neutral situations.
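The contrast between conflict and alignment situations can be made concrete with a short Bayesian calculation. The urn compositions, prior, and sample below are hypothetical stand-ins, not the parameters actually used in the experiment; they merely show how a sample can resemble one urn while Bayes' rule favors the other.

```python
from math import comb

def posterior_up(prior_up, p_up, p_down, k, n):
    """Posterior probability that the sample came from the 'up' urn after
    drawing k black balls in n draws (with replacement), via Bayes' rule."""
    like_up = comb(n, k) * p_up**k * (1 - p_up) ** (n - k)
    like_down = comb(n, k) * p_down**k * (1 - p_down) ** (n - k)
    return prior_up * like_up / (prior_up * like_up + (1 - prior_up) * like_down)

# Hypothetical conflict trial: the 'up' urn holds 60% black balls, the
# 'down' urn 40%, the prior for 'up' is 1/4, and 4 of 6 drawn balls are black.
post = posterior_up(prior_up=0.25, p_up=0.6, p_down=0.4, k=4, n=6)
# The sample proportion (4/6) resembles the 'up' urn, so the representativeness
# heuristic picks 'up' -- but the Bayesian posterior still favors 'down':
print(round(post, 3))  # 0.429 < 0.5
```

In such a trial the heuristic and Bayes' rule prescribe opposite responses, which is the conflict the response-time analyses exploit.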
Response times were significantly longer in conservative situations where the
correct decision was to choose the urn which had the lower prior probability associated
with it. One possible explanation for this finding could be the following: In the grand-average LRP waveforms (over all participants) it was observed that participants were already somewhat oriented towards the urn with the higher prior probability before the
59 Grether (1980) does not say how many decisions were conducted in each session, but the average number of 16.45 can be derived from the reported total number of participants and decisions (see Harrison, 1994).
sample was presented. However, if the sample evidence then was strongly in favor of the
other urn, this prior activation had to be inhibited and instead the opposite response had to
be activated, resulting in a delay of the decision time (see Eimer, 1995, or Kornblum,
Hasbroucq, & Osman, 1990).
H3 could only partly be confirmed. Fisher's test supported the assumption that fast (vs. slow) decisions were associated with fewer errors when the representativeness heuristic was
aligned with Bayesian updating. In these situations, the automatic process efficiently
delivers the correct decision and can be regarded as a fast and frugal heuristic in the sense
of Gigerenzer and Goldstein (1996). Since there is nothing to be gained over the automatic
process by further deliberating, thinking for too long about the decision is actually
counterproductive (see also Achtziger & Alós-Ferrer, 2010).
Contrary to expectations, fast decisions did not result in more errors than slow
decisions when the representativeness heuristic led to the wrong choice. In conflict
situations, decision errors are consistent with the rather automatic process of
representativeness. Therefore, there should be a tendency for errors to result from quicker
decision making. In contrast, it would seem that for correct trials the participants would
adopt a more controlled response strategy in order to avoid errors. Such controlled
strategies should have led to longer response times because the more analytical a decision
process is, the more time is needed for its employment (e.g., Ariely & Zakay, 2001; see
also Achtziger & Alós-Ferrer, 2010). There are at least two possible explanations for the
null finding: First, fast decisions in these conflict situations might have not only been
decisions which were based on the automatic process of representativeness (leading to a
wrong choice in these situations), but also decisions based on the quick and simple
conservatism heuristic (leading to a correct choice in these situations). Second, fast
decisions were not necessarily solely based on a heuristic but might also have been
decisions based on effective strategies which were applied in an almost automatic way by
participants who were convinced that they had a good understanding of the decision
problem. This assumption would also explain the observation that participants who
reported good statistical skills decided more quickly. This could be because they had
developed certain strategies, the efficacies of which they were sure of. In the same way,
long response times might have not only reflected the controlled process of Bayesian
updating, but might also have been a sign of uncertainty about the correct solution, i.e.,
self-doubt (see Rotge et al., 2008). These considerations could explain why slow decisions
were not generally associated with lower error rates in conflict situations.
H4 could only partially be confirmed. The regression analyses revealed that a high level of
self-assessed knowledge in statistics did not significantly reduce the likelihood of an error.
As previously stated, the level of statistical skills only affected response times. Better skills resulted in significantly faster decisions, possibly because these participants applied straightforward decision strategies and did not need to dwell on each decision. However, the lack of an effect on error rates suggests that participants with high skills did not apply more effective decision strategies. This result was unexpected, as the results of Study 1a and Study 1b (see above) showed that statistical skills had a significant influence on error rates. One can only speculate as to why this effect was not found in the present study. It could be an isolated incident, due to the particular sample of students used in the present study. This should be tested more thoroughly in future studies. For instance, it would be advisable to determine a participant's knowledge of statistics with a better method than self-reports (e.g., a short test in statistics60).
Furthermore, it was assumed in H4 that participants high in faith in intuition would be more prone to errors in situations where the representativeness heuristic or the conservatism heuristic was opposed to Bayesian updating (conflict and conservative situations, respectively). This postulation was based on the fact that faith in
intuition (Epstein et al., 1996) corresponds to heuristic processing as described by
Kahneman et al. (1982). The regression analyses confirmed the expected positive
relationship between faith in intuition and error likelihood only for conservative situations.
In these situations, a high level of faith in intuition increased the likelihood of an error. In
contrast, a high level of faith in intuition reduced the likelihood of an error when the
representativeness heuristic and Bayesian updating were opposed, a finding exactly
converse to expectations. For example, Klein (2003) explicitly defines intuition as “the
ability to make decisions using patterns to recognize what's going on in a situation and to
recognize the typical action scripts with which to react” (p. 13). Glöckner and Witteman
(2010a) speak of “matching intuition” processes. Therefore, intuitive participants were
assumed to rely more strongly on the representativeness heuristic in conflict situations,
compared to non-intuitive persons. Since faith in intuition corresponded to conservative
decision behavior, one possible explanation for the unexpected finding is that for intuitive
60 Performance on a 3-item general numeracy scale (Lipkus et al., 2001), which was included in the final questionnaire of Study 2 (see appendix E), was not significantly related to error rates.
participants, conservatism dominated the representativeness heuristic. As already noted,
due to the design of the experiment, in both situations where the representativeness
heuristic led to the wrong choice, conservatism (taking into account base-rates only) led to
the correct choice. Therefore, if intuitive participants were mainly conservative
participants, they would give the correct response in conflict situations. Grether's (1992)
observation was that the representativeness heuristic dominated conservatism for all
participants. In the present study, this does not seem to have been the case for highly
intuitive decision makers. As Anderson (1991) pointed out, in the event that there are
several candidates for the role of the heuristic attribute, it is not always possible to
determine a priori which heuristic will govern the response to a particular decision problem
(see also Kahneman & Frederick, 2002). Based on self-reporting data, Keller et al. (2000)
argue that faith in intuition should not generally be equated with heuristic processing as
intuitive people seem to be susceptible to particular heuristic cues but not to others.
In the present study, intuitive people were prone to choose the urn with the higher
prior probability, which is a very simple heuristic that is actually adaptive in conflict
situations. In these situations, relying on intuition obviously enhanced their decision
performance (see Frank et al., 2006). As previously stated, conservatism in updating tasks
seems related to confirmation bias (see Klayman, 1995). Many studies in social psychology have demonstrated that people often tend to accept new evidence which confirms their prior beliefs, while rejecting or reinterpreting counterevidence (e.g.,
Batson, 1975; Edwards & Smith, 1996; Fiedler, Walther, & Nickel, 1999; Lord, Ross, &
Lepper, 1979). Whether or not intuitive people are also prone to such a bias might be an
interesting issue for further investigation.
As also observed in Study 1a, errors were more likely for female participants even when controlling for several other person characteristics. Again, gender could perhaps be confounded with a number of other variables which were not taken into account in the present study (see Denmark et al., 1988). Alternatively, stereotype threat (Steele & Aronson, 1995) could serve as another possible explanation for this observed effect.
Moreover, analyses of response times showed that male participants responded significantly more slowly.
As a further result of the regression analyses, participants' self-reported sorrow
about the loss of money was positively associated with response times. Maybe this loss
aversion was responsible for longer decision times in these participants as they were prone
to consider more carefully each of their decisions. This consideration is based on the
argument that decisions of high importance lead to reaching for a higher level of
confidence before finally making a choice (Beach & Mitchell, 1978; Payne et al., 1988,
1992). This appears, then, to be a kind of incentive effect. However, according to the probit regression analyses, these prolonged decision times for high-sorrow-reporting participants did not result in a significant decrease in decision errors. On the other hand, self-reported joy over a monetary gain was shown to reduce the likelihood of an error. This could also be a type of incentive effect: the greater a participant's joy over a monetary gain, the higher his or her motivation to earn as much money as possible in the experiment. Accordingly, a high motivation based on monetary incentives might
have led to controlled Bayesian updating as opposed to heuristic strategies. However, if
this was the case, it did not simultaneously lead to longer response times, which would
have been plausible in this regard. These two items could possibly be regarded as
indicators of a promotion and a prevention focus (Higgins, 1997) regarding monetary
incentives, respectively. Expressing strong joy over a monetary gain might reflect a
promotion focus concerning money (focusing on gaining money through correct
decisions). Strong sorrow about a monetary loss possibly indicates a prevention focus
(focusing on losing/not gaining money through incorrect decisions). People with a
promotion focus have been shown to perform better in difficult tasks, whereas individuals
with a prevention focus have been found to have longer response times due to their
vigilance against errors (e.g., Crowe & Higgins, 1997). This could explain why joy over an
imagined win was positively related to decision performance, whereas sorrow about an
imagined loss was not, but resulted in longer response times. In general, the finding of an
effect of the subjective valuation of money on decision behavior is consistent with the
results of Achtziger and Alós-Ferrer (2010). These authors asked participants how much
money they needed on a monthly basis. This variable was significantly negatively
associated with the likelihood of an error (without affecting response time) in the paradigm
of Charness and Levin (2005), which was interpreted such that participants with a high need for money were more strongly motivated by the monetary incentives provided61.
H5 postulated a relationship between LRP amplitude and decision behavior. Many studies
have found that people often base their decisions solely upon base-rates and ignore further
information (e.g., Dave & Wolfe, 2003; El-Gamal & Grether, 1995; Grether, 1992;
Kahneman & Tversky, 1972; Phillips & Edwards, 1966). This phenomenon has been
61 This item was also included in all studies of the present dissertation, but was not significantly related to decision behavior.
labeled conservatism (Edwards, 1968). In the present study it was assumed that
conservatism belongs to the class of heuristics (Gigerenzer & Goldstein, 1996; Kahneman
et al., 1982) because it represents a simple rule of thumb which only extracts very few cues
from the environment and ignores the majority of available information, thereby relying on
only a few cognitive resources (e.g., Gigerenzer, 2007; Gigerenzer & Goldstein, 1996).
It was hypothesized that this simple "base-rate only" strategy (Gigerenzer & Hoffrage, 1995) would be reflected in participants' LRP amplitude. The LRP is an event-related potential which indicates the brain's orientation towards a left- or right-hand response and can be seen as a temporal marker of the moment in which the brain has begun to make a decision for one response or the other (see review by Smulders & Miller, forthcoming). It has already been shown that precues which carry information about the hand with which a subsequent response will have to be executed produce a response activation prior to the onset of a
test-stimulus (e.g., Coles, 1989; de Jong et al., 1988; Hackley & Miller, 1995; Leuthold et
al., 2004). Research has also shown that the LRP can be affected by prior probability
information during the foreperiod (e.g., Gehring et al., 1992; Gratton et al., 1990; Miller,
1998; Miller & Schröter, 2002; but see Scheibe et al., 2009), albeit in much simpler
paradigms than the present one.
In H5 it was predicted that conservative participants would be strongly oriented
towards a left-hand or right-hand response depending upon base-rates, even before any
additional information which would also be relevant for a future (Bayesian) decision would
be presented. This hypothesis could be confirmed by the data: The strength of response
activation (left or right) as indexed by the LRP difference amplitude (prior 3/4 minus prior
1/4) before the presentation of sample evidence was a significant predictor of the rate of
conservative errors. How can this finding be brought together with earlier research on
conservatism? Different explanations have been proposed to account for conservatism in
decision making. The present results fit best with the explanations put forward by Navon
(1978) and Winkler and Murphy (1973) who claimed that decision makers do not
anticipate information from the environment to be fully reliable and independent. They
also fit with Peterson and Beach's (1967) very similar argument that human decision
makers who are confronted with an uncertain environment where information is often
fallible undervalue the diagnostic impact of the evidence. Winkler and Murphy (1973)
argue that in most real-world decision making situations, information sources are not
independent of each other so that some of the information is redundant. If participants (to some extent) treat the experimental situation as though it were similar to a realistic
inferential situation, it follows that they give too little weight to the evidence and behave
conservatively. Following this line of argument, conservatism corresponds to ecological
rationality (the heuristic fits the structure of the information in the environment) as
opposed to normative rationality (Evans & Over, 1996). The extent to which a person
believes in a prior hypothesis after viewing further evidence depends upon both the degree
of prior belief and the diagnosticity of the evidence (Evans, 2007a). Results of the present
study suggest that this diagnosticity is undervalued by conservative decision makers as
they seem to have already made their decision before more evidence (i.e., the sample of
drawn balls) is presented. In contrast, the present findings do not seem to be consistent
with explanations of conservatism in decision making which emphasize processes which
involve taking the sample evidence into account. These include misaggregation of the
components (Edwards, 1968), avoidance of extreme responses (DuCharme, 1970) and
fallible retrieval processes (Dougherty et al., 1999). In the present study, even before presentation of the sample evidence, conservative participants showed a distinct preparedness to make their decision. This was suggested by their LRP, which showed that, depending upon the prior probabilities, these participants already tended to initiate the pressing of one key or the other. This result suggests that Wallsten's (1972) prognosis,
which states that it may be impossible to identify a dominant explanation for conservatism,
is not warranted. Instead, by means of neuroscientific methods it is at least possible to rule
out some of the explanations of conservatism in decision making that have been put
forward in the past. Concerning conservative behavior in decision making, it seems that
people make assumptions which render some of the given information, namely the sample
evidence, irrelevant to their judgments (see also Chase et al., 1998). This approach to
conservative decision making is supported by the present findings.
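The LRP difference amplitude that served as a predictor here can be sketched with the standard double-subtraction formula (Coles, 1989). The epoch data, channel names, condition split, and analysis window below are illustrative placeholders, not the actual recordings or parameters of the study.

```python
import numpy as np

def lrp(c3, c4, response_hand):
    """Lateralized readiness potential via the double-subtraction method
    (Coles, 1989): average the contralateral-minus-ipsilateral difference
    over left-hand and right-hand trials.

    c3, c4: arrays of shape (n_trials, n_samples) -- activity at the
    electrodes over left/right motor cortex.
    response_hand: array of 'left'/'right' labels per trial.
    """
    left = response_hand == "left"
    right = response_hand == "right"
    # Contralateral minus ipsilateral, averaged across the two hands
    return 0.5 * ((c4[left] - c3[left]).mean(axis=0)
                  + (c3[right] - c4[right]).mean(axis=0))

# Random data standing in for real epochs (200 trials, 100 samples)
rng = np.random.default_rng(0)
n_trials, n_samples = 200, 100
c3 = rng.normal(size=(n_trials, n_samples))
c4 = rng.normal(size=(n_trials, n_samples))
hand = np.where(rng.random(n_trials) < 0.5, "left", "right")
prior_34 = rng.random(n_trials) < 0.5  # trials with prior 3/4 (hypothetical split)

# Difference amplitude used as predictor: mean LRP in a pre-sample window,
# prior-3/4 trials minus prior-1/4 trials
window = slice(50, 100)
diff_amp = (lrp(c3[prior_34], c4[prior_34], hand[prior_34])[window].mean()
            - lrp(c3[~prior_34], c4[~prior_34], hand[~prior_34])[window].mean())
```

The double subtraction cancels lateralized activity that is unrelated to response preparation, which is why the pre-sample LRP can index an early orientation towards one response key.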
A negative effect of LRP difference amplitude on response times was also found;
however, it did not reach statistical significance in the final regression model. It is not
surprising that participants who were strongly oriented towards the alternative with the
higher prior probability (conservative participants) tended to be very quick in making their
decision for this alternative. The prior probability information allowed these participants to prepare their responses even before the onset of the target stimulus (the sample
evidence) (see Gehring et al., 1992). In contrast, non-conservative decision makers did not
yet prepare a response, and instead waited for the sample evidence to appear so that,
overall, their responses took longer.
It is difficult to decide whether the conservatism heuristic should be regarded as an
automatic or controlled process in the present study. It is conceivable that the presentation
of prior probabilities elicited a tendency to favor the more probable alternative (and a
corresponding motor orientation reflected in the LRP amplitude) in a quasi-automatic
fashion. On the other hand, the distinct orientation towards one urn or the other which
developed over several seconds might point to a rather controlled, deliberate process. One
further hint which might suggest that the conservatism heuristic is not an automatic process
comes from the results of the probit regression analyses. Following the dual-process
account (see Evans, 2008; Sanfey & Chang, 2008; Weber & Johnson, 2009), a conflict
between a rather automatic process and a more controlled process is assumed to perturb the
latter, resulting in an increased probability of errors. This was the case for conflicts of
Bayesian updating with the reinforcement heuristic (Study 1a) and the representativeness
heuristic (Study 2), even when controlling for faith in intuition. In contrast, when faith in
intuition was controlled for, error probability was not increased (and instead even
decreased) in conservative situations where Bayesian updating conflicted with the
conservatism heuristic, compared to neutral situations. This may indicate that the
conservatism heuristic is not a process which is automatically activated across all
participants, but rather a more deliberate strategy used by some intuitive people. These
individuals might have intentionally decided to make their guesses based solely (or
predominantly) upon prior probabilities and to ignore (or underweight) the sample
evidence, possibly based on certain assumptions which rendered this evidence irrelevant to
their judgments (see above). Chen and Chaiken (1999) presumed that heuristic processing
encompasses unconscious intuitive judgments as well as the deliberate use of particular
decision rules (see also Kahneman & Frederick, 2002). Some cues (e.g., similarity) might
trigger rather automatic processes (e.g., the representativeness heuristic), whereas others
(e.g., probabilities) might initiate more rule-based heuristic processing (e.g., conservatism)
(see also Keller et al., 2000). Still, this issue cannot be satisfactorily decided based upon
the findings of the present study; it remains a research topic to be investigated in future
studies.
H6 postulated a relationship between N2 amplitude and the amount of conflict in a
decision situation. The amplitude of the N2 component (for a review, see Folstein & Van
Petten, 2008) is thought to reflect response conflict (e.g., Bartholow et al., 2005; Donkers
& van Boxtel, 2004; Nieuwenhuis et al., 2003). During non-conflict trials, primes and
targets activate the same response tendency, whereas during conflict trials, primes and
targets activate opposing response tendencies, producing conflict and an enhanced N2 amplitude.
According to the conflict-monitoring theory (Yeung et al., 2004; Yeung & Cohen, 2006),
the N2 should be more pronounced on correct high- vs. low-conflict trials since the N2 is
assumed to be correlated with the amount of conflict in the same way in which ACC
activity increases with increasing conflict (Botvinick et al., 2001; Van Veen & Carter,
2006; but see Mennes et al., 2008). In the present study, both the priors and the sample
evidence activated certain response tendencies during decision making. Correct neutral
trials in which the representativeness heuristic did not apply were compared with correct
conflict trials in which the heuristic led to the wrong choice. In both types of situations
there is, to some extent, a response conflict because the sample evidence activates a
different response than the prior has activated before. Still, it was assumed that participants
would experience higher response conflict in the conflict situations. Following a dual-process approach (see Evans, 2008; Sanfey & Chang, 2008; Weber & Johnson, 2009),
higher conflict is assumed in situations where the controlled process of Bayesian updating
conflicts with the rather automatic process of representativeness compared to situations in
which this automatic heuristic does not apply. Therefore, over all participants, the
amplitude of the N2 should be larger in conflict vs. neutral situations.
This is exactly what was found by means of a one-sample t-test of the N2
difference amplitudes (neutral situations minus conflict situations) of all participants
against zero. Although both types of situations represented approximately equally difficult
decision problems and in both types of situations there was a response conflict because the
sample evidence activated a different response than the prior had previously activated,
conflict situations elicited a significantly stronger N2 amplitude than neutral situations.
The amplified N2 amplitude in conflict situations points to correspondingly enhanced
activity in the ACC (e.g., Nieuwenhuis et al., 2003) due to the detection of conflict
between a predominantly analytical response tendency and a more heuristic response
tendency (see also De Martino, Kumaran, Seymour, & Dolan, 2006). This finding also
corroborates evidence from cognitive neuroscience which demonstrates that processes of
controlling conflict are set in motion very early in the response stream and operate below
conscious awareness (Berns et al., 1997; Nieuwenhuis et al., 2001). Social psychological
models have also described bias detection as an important aspect of behavioral control, but
most of them (e.g., Wegener & Petty, 1997; Wilson & Brekke, 1994) have emphasized the
deliberative process of detecting bias and deciding to initiate control (but see for example
Amodio, Devine, & Harmon-Jones, 2008). In contrast, neurocognitive data as well as the
present results suggest that conflict monitoring is a continuously active process which
requires no conscious awareness of a bias (e.g., Botvinick et al., 2001; Miller & Cohen,
2001). When this process detects different conflicting response activations, a resource-dependent regulatory process becomes involved which draws on more deliberate and
conscious processing, is linked to the prefrontal cortex (Kane & Engle, 2002; Krawczyk,
2002; MacDonald et al., 2000; Miller, 2000), and takes more time than the spontaneous
conflict detection as indicated by the modulation of the N2.
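The reported test can be sketched in a few lines. The per-participant difference amplitudes below are made up for illustration; the actual analysis used the measured N2 difference amplitudes of all participants.

```python
import numpy as np

def one_sample_t(x, mu=0.0):
    """One-sample t statistic for H0: mean(x) == mu."""
    x = np.asarray(x, dtype=float)
    return (x.mean() - mu) / (x.std(ddof=1) / np.sqrt(x.size))

# Illustrative per-participant N2 difference amplitudes (neutral minus
# conflict, in microvolts); positive values mean a more negative N2 in
# conflict situations. These numbers are invented for the example.
diff_amps = np.array([1.2, 0.8, 1.5, 0.3, 1.1, 0.9, 1.4, 0.6])
t = one_sample_t(diff_amps)  # tested against zero
```

A t value significantly above zero would indicate, as in the study, that conflict situations elicit a reliably more negative N2 than neutral situations across participants.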
N2 components have also been found to be sensitive to manipulations of stimulus
frequency (e.g., Enriquez-Geppert et al., 2010; Nieuwenhuis et al., 2003; see also Jones,
Cho, Nystrom, Cohen, & Braver, 2002; but see Bartholow et al., 2005), suggesting that any
low-frequency event should increase the N2. However, this alternative explanation of the
present findings can be ruled out as neutral situations were less probable and, therefore,
less frequent than conflict situations in the present study (see also below). This means that
if the observed effect was due to stimulus infrequency, the N2 should have been more
pronounced in neutral trials, which is exactly the opposite of what was actually found.
Taken together, these findings strongly suggest that the N2, during correct trials, is an early
indicator of the detection of conflict situations in decision making, independent of the
frequency in which these situations occur.
Moreover, a more pronounced P300 (P3a and P3b components) was found for
neutral vs. conflict situations of equal difficulty. The reduced P300 after an enlarged N2 in
conflict situations may reflect inhibitory processing. Results of several studies suggest that
the P300 amplitude elicited by a condition involving inhibition is smaller than the P300
amplitude elicited by a condition without inhibition (e.g., Chen et al., 2008; Jackson,
Jackson, & Roberts, 1999; Mueller, Swainson, & Jackson, 2009). The pattern of the P300
amplitude in the present study is consistent with these findings. The amplitude of the P300
was less pronounced in conflict situations (which required inhibition of the rather
automatic process of representativeness) compared to neutral situations. An alternative
explanation would be the following: Research has repeatedly shown that the amplitude of
the P300 is highly sensitive to stimulus probability, increasing as a response to a given
stimulus when the subjective probability of this stimulus decreases (e.g., Courchesne,
Hillyard, & Galambos, 1975; Donchin & Coles, 1988; Duncan-Johnson & Donchin, 1977;
Friedman, Cycowicz, & Gaeta, 2001; Johnson & Donchin, 1980). In the present study,
neutral situations were low in frequency (they were presented in about 8.5% of all trials)
whereas conflict situations occurred in about 20.5% of trials. This could also explain the
large P3a and P3b components in response to neutral situations.
Topographic mapping of the difference between conflict and neutral situations
confirmed that the effect found in the time window of the N2 was not just an emerging
P300 component. Differences in P300 amplitude should result in between-condition
differences that are maximal over central (P3a) and posterior (P3b) scalp locations (e.g.,
Polich, 2007), whereas maximal activity of the N2 difference wave was observed over
frontocentral sites, consistent with previous research (e.g., Bartholow et al., 2005; Van
Veen & Carter, 2002; Yeung et al., 2004).
Consistent with H7, individual N2 difference amplitudes (neutral situations minus conflict
situations) on correct trials significantly predicted the error rate in situations where the
representativeness heuristic conflicted with Bayesian updating. The larger the N2
difference amplitude (i.e., the more negative the N2 amplitude in conflict vs. neutral
situations), the less likely were errors to occur in conflict situations. This finding is
consistent with the results of several other studies which found a link between N2
amplitude and individual differences in conflict monitoring and resulting performance
(e.g., Amodio, Master, et al., 2008; Fritzsche et al., 2010; Pliszka et al., 2000; van Boxtel
et al., 2001). Detection of a conflict during response selection (i.e., the simultaneous
activation of incompatible actions) provides an early warning signal that errors are likely to
occur and, therefore, increased attention is required. This seems to also play an important
role in decision making as seen in the present study. In line with theoretical accounts
(Nieuwenhuis et al., 2003; Yeung et al., 2004; Yeung & Cohen, 2006), the distinct N2
difference amplitudes observed for participants with a low vs. high rate of decision errors
during conflict situations might reflect indicators of pre-response conflict monitoring
which is crucial for flexible, correct decision making. Those who perform well are
presumably more effective at detecting conflict which results from an activation of the
inappropriate response elicited by similarity matching (representativeness). Therefore, they
are better able to inhibit these automatic activations, which reduces their error rate in
conflict situations. This means that although conflict situations elicited significantly
stronger N2 amplitudes than neutral situations over all participants (see above),
participants differed in their sensitivity to the particular type of conflict resulting from the
automatic process of representativeness.
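The shape of this individual-differences relationship can be illustrated with the probit link used in the regression analyses. The coefficients below are hypothetical, chosen only so that a larger (more positive) N2 difference amplitude lowers the predicted error probability, as observed.

```python
from math import erf, sqrt

def probit_p(x, beta0, beta1):
    """Probability of a decision error under a probit model:
    P(error) = Phi(beta0 + beta1 * x), with Phi the standard normal CDF."""
    z = beta0 + beta1 * x
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical coefficients: beta1 < 0 means that a larger N2 difference
# amplitude (neutral minus conflict) predicts fewer errors in conflict trials.
for n2_diff in (-3.0, 0.0, 3.0):
    print(round(probit_p(n2_diff, beta0=-0.5, beta1=-0.3), 3))
```

With a negative slope, the predicted error probability falls monotonically as the difference amplitude grows, mirroring the reported association between conflict monitoring and performance.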
The significant association between individual N2 amplitude and error rate in
conflict situations supports the argument that decision errors in these situations result from
faulty conflict detection (Botvinick et al., 2001; Devine, 1989; Evans, 2006, 2007b; Fiske
& Neuberg, 1990; Frederick, 2005; Glöckner & Betsch, 2008b; Kahneman & Frederick,
2002, 2005; see also Amodio, Devine, & Harmon-Jones, 2008). In contrast, the present
findings do not suggest that errors are due to faulty inhibition processes (Denes-Raj &
Epstein, 1994; Epstein, 1994; Epstein & Pacini, 1999; De Neys & Glumicic, 2008; De
Neys et al., 2008; Moutier & Houdé, 2003; Sloman, 1996).
In summary, results of Study 2 corroborate the conclusions of Study 1a and 1b and support
the relevance of the dual-process perspective to economic decision making. Decision
making can be understood as the interaction of several processes. On the one hand, there
might be a rational strategy corresponding to Bayesian calculations in order to maximize
expected utility. On the other hand, heuristics such as conservatism or the
representativeness heuristic might also play a role in human decision making. Following
the dual-process approach, economic rationality is related to a set of controlled processes
and boundedly rational behavior is mostly linked to a set of automatic or intuitive
processes, such as the representativeness heuristic. Both heuristic strategies of Study 2
were based on partial information about the decision situation and did not consider all
aspects of the decision problem. Whereas the representativeness heuristic is primarily
sensitive to similarity (Tversky & Kahneman, 1983), conservatism is mainly sensitive to
base-rates (Edwards, 1968). Results of Study 2 further illuminated the processes
underlying participants' decision behavior. The likelihood of an erroneous decision was increased when the rather automatic process of representativeness conflicted with Bayes' rule, and was decreased when the two strategies were aligned. In the same way,
decisions were made more slowly in conflict situations and faster in alignment situations,
compared to situations in which the representativeness heuristic did not apply. Conflicts
between representativeness and Bayesian updating were reflected in an enlarged amplitude
of the N2 component.
Mirroring Study 1a and 1b, interindividual differences were highly relevant to
economic decision making. These require further research in order to shed more light on
the often observed heterogeneity concerning economic behavior. For example, low skills in
statistics and strong regret about an imagined monetary loss resulted in prolonged decision
times without affecting performance. Faith in intuition made it more likely that a decision
maker used a simple heuristic which led to non-optimal behavior yet also to very effective
behavior, depending on the exact nature of the situation. Furthermore, efficient conflict
monitoring for conflicts between Bayesian updating and the representativeness heuristic
(as reflected in individual N2 amplitudes) increased the probability that conflicts between
the controlled and the more automatic process were detected and consequently resolved in
favor of the former. Individual differences in conservative behavior were associated with
distinct LRP amplitudes.
3 General Discussion
3.1 The Present Research
3.1.1 Overview
The present studies combined methods of experimental psychology, neuroscience,
and microeconomics in order to disentangle the processes which underlie human decision
behavior. The particular focus was on the conflict between heuristic and rational strategies
in the context of economic decision making. In two different paradigms, Bayes' rule
represented the rational decision strategy that could be used in order to maximize
individual payoff. The decision tasks were designed in a way that this (rather controlled)
decision strategy often conflicted with simpler decision strategies (e.g., reinforcement
learning, see Study 1a) which partly exhibited rather automatic characteristics. In addition,
an important aim was to investigate if and how certain variables (e.g., personality traits and
skills) moderate or perturb decision making processes, and how interindividual
heterogeneity in decision behavior is reflected in distinct brain activity.
The basic research paradigm on which the presented studies were based is the dual-process approach in human thinking and decision making (e.g., Bargh, 1989; for recent
reviews, see Evans, 2008, or Weber & Johnson, 2009) which postulates the existence of
two different types of decision processes: fast and effortless automatic processes which
require little or no conscious control, and slow, resource-dependent controlled processes
which require some degree of conscious supervision. It was proposed that a dimension of
different levels of automaticity versus controlledness might correspond to a dimension of
different rationality levels observed in economic research on bounded rationality (Ferreira
et al., 2006). Therefore, it was assumed that in certain situations decision behavior would
be the result of an interaction of (at least) two different processes: one which corresponds
to a more controlled, rational strategy, and another which corresponds to more boundedly
rational behavioral rules. The latter might exhibit rather automatic features and pertain to
the concept of intuition (see Glöckner & Witteman, 2010a).
Three such intuitive, heuristic strategies were addressed in the present research:
conservatism (Edwards, 1968), the representativeness heuristic (Grether, 1980, 1992;
Kahneman & Tversky, 1972), and the reinforcement heuristic (Charness & Levin, 2005). It
was proposed that conflicts between these strategies and Bayesian updating would not only
be detected in participants' error rates and response times, but would also be visible in
electrocortical activity. Moreover, individual differences in decision making strategies
(e.g., Betsch, 2004, 2007; Bröder, 2000; Epstein et al., 1996; Stanovich, 1999) were
assumed to be reflected in distinct brain activity.
3.1.2 Findings
In Study 1a, the conflict between Bayesian updating and the reinforcement
heuristic, a decision rule corresponding to a simple “win-stay, lose-shift” principle, was
addressed in the paradigm of Charness and Levin (2005; see also Achtziger & Alós-Ferrer,
2010). Participants made a series of two-draw decisions and received a certain amount of
money for every successful decision, with some participants receiving twice as much as
others. EEG was recorded while participants made their decisions and were presented
feedback regarding these decisions. Results showed that the conflict between the two rules
led to increased error rates, especially for people who were prone to trust their intuitive
feelings (Epstein et al., 1996). In contrast, participants with good skills in statistics made
more correct decisions. Participants' reliance on the reinforcement heuristic was reflected
in the amplitude of the feedback-related negativity (FRN; Gehring & Willoughby, 2002;
Holroyd & Coles, 2002; Miltner et al., 1997), an event-related potential which is assumed
to reflect the extent of reinforcement learning (Holroyd & Coles, 2002; Holroyd et al.,
2002; Holroyd et al., 2003; Holroyd et al., 2005). The FRN in response to first-draw
negative feedback was more strongly pronounced in participants with a high rate of
reinforcement mistakes, that is, participants who often relied on the reinforcement
heuristic. However, this association was not observed in participants paid with only a very
low amount of money although the amount of monetary incentives did not affect the
decision behavior itself. From the results it was concluded that decision making in this
paradigm results from the interaction of two processes: an automatic reinforcement process
corresponding to a heuristic which ignores prior probabilities, and a more controlled,
deliberative process of Bayesian updating which can be considered rational in an economic
sense.
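The contrast between these two processes can be made concrete with a minimal sketch; the 0/1 coding of choices and the likelihood values are purely illustrative and are not the parameters of the Charness and Levin paradigm:

```python
def win_stay_lose_shift(prev_choice, won):
    """Reinforcement heuristic: repeat a rewarded choice, switch after a loss.
    Choices are coded 0/1; prior probabilities are ignored entirely."""
    return prev_choice if won else 1 - prev_choice

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Controlled strategy: one step of Bayesian updating of P(hypothesis)
    in the light of an observed outcome."""
    joint_h = prior * p_obs_given_h
    return joint_h / (joint_h + (1 - prior) * p_obs_given_not_h)
```

The heuristic conditions only on the valence of the last outcome (`win_stay_lose_shift(0, True)` stays with choice 0, `win_stay_lose_shift(0, False)` shifts to 1), whereas `bayes_update(0.5, 0.25, 0.5)` lowers the belief from .50 to .33 because the observed outcome, although possibly rewarding, is twice as likely under the alternative hypothesis. This is precisely the configuration in which a "win" can mislead the heuristic while Bayesian updating favors switching.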
The aim of Study 1b was to further elucidate the processes underlying decision outcomes
observed in Study 1a. For this purpose, the paradigm was altered in a way that reinforcing
characteristics were removed from feedback on the first draw (see Charness & Levin,
2005; see also Achtziger & Alós-Ferrer, 2010). This alteration of the decision task
dramatically reduced rates of decision errors, which implies that errors in Study 1a actually
resulted from conflicts between Bayes' rule and the reinforcing qualities of the
feedback. Performance was strongly affected by the magnitude of monetary incentives
such that those people who received high incentives committed fewer errors than persons
who received a lesser amount of money. Due to the removal of the reinforcing
characteristics of the first feedback of a trial (by not associating this outcome with success
or failure), no FRN was elicited by this first-draw feedback, which strengthens the
assumption that the FRN responses observed in Study 1a were specific responses to the
valence of feedback. Nevertheless, it was observed that participants' rate of reinforcement
errors was positively associated with the amplitude of the FRN in response to second-draw
feedback of a trial. This implies that reinforcement experiences still played a role in this
altered paradigm of Study 1b, albeit a much smaller one (rates of reinforcement
errors were significantly reduced). Related to this finding, faith in intuition was again
highly associated with the rate of reinforcement errors in Study 1b. In summary, it was
concluded that participants are better able to apply controlled strategies such as Bayesian
updating when simple heuristics such as the reinforcement heuristic are not available in a
given decision situation.
On the whole, results of Study 1a and 1b imply that if feedback which is relevant for a later
decision has an affective quality, a rather automatic reinforcement process which ignores
prior probabilities and is associated with particular brain activity might conflict with a
more controlled process of Bayesian updating leading to rational decisions. This conflict
oftentimes results in detrimental decision making. But when this reinforcing attachment
(i.e., the evaluative quality of the feedback) is removed, the reinforcement heuristic is no
longer available and cannot perturb processes of Bayesian updating. Therefore, decision
makers are able to apply controlled processes to a much greater extent, resulting in a
decrease of reinforcement errors. However, intuitive decision making is not completely
precluded and is still associated with particular personality characteristics and distinct
brain activity.
In Study 2, the conflict between Bayesian updating and two simple heuristics was
addressed in a paradigm modified on the basis of an earlier experiment by Grether (1980,
1992). In this task, rational behavior consistent with Bayes' rule might conflict with
conservatism (Edwards, 1968) wherein people give too little weight to recent experience as
compared to prior beliefs, and with the representativeness heuristic (Kahneman & Tversky,
1972) wherein people judge the probability of a hypothesis by considering how much the
hypothesis resembles available data. Participants observed how balls were drawn with
replacement from one of two urns with different prior probabilities. They had to guess
from which urn the balls were drawn and received a small amount of money for every
correct guess. EEG was recorded while they were presented with prior probabilities and
sample evidence and made their decisions. Results showed that conflict between the
controlled process of Bayesian updating and the automatic process of representativeness
led to higher error rates and response times.
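The rational benchmark in such a two-urn task can be sketched as a posterior computation; the prior and the urn compositions below are illustrative values, not the exact parameters used in Study 2:

```python
def posterior_urn_a(prior_a, p_a, p_b, k, n):
    """P(urn A | sample) after n draws with replacement, of which k showed
    the marked colour; p_a and p_b are the colour proportions in urns A and B."""
    like_a = p_a ** k * (1 - p_a) ** (n - k)
    like_b = p_b ** k * (1 - p_b) ** (n - k)
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

# A sample of 1 marked ball in 3 draws "looks like" urn B (proportion 1/3),
# yet with a prior of .80 the posterior still favors urn A:
print(posterior_urn_a(0.8, 2/3, 1/3, 1, 3))  # 0.666...
```

In such a configuration the representativeness heuristic (the sample proportion matches urn B) and Bayes' rule (posterior of .67 for urn A) prescribe opposite choices, which is exactly the conflict situation of interest in Study 2.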
Furthermore, this conflict elicited a strongly pronounced N2, an event-related
potential which is assumed to reflect the amount of response conflict (e.g., Bartholow et
al., 2005; Donkers & van Boxtel, 2004; Nieuwenhuis et al., 2003; but see Mennes et al.,
2008). Consistent with previous research (e.g., Amodio, Master, et al., 2008; Fritzsche et
al., 2010; Pliszka et al., 2000; van Boxtel et al., 2001), the extent to which participants
were able to detect this particular conflict (and consequently to suppress the automatic
response) was reflected in the magnitude of their individual N2 amplitude.
Furthermore, a conservative decision strategy (staying with the priors even if the
sample evidence strongly favored the other choice) was reflected in participants' LRP (de
Jong et al., 1988; Gratton et al., 1988; see Eimer, 1998), an event-related potential which
indicates lateralized (left- vs. right-hand) response preparation (see review by Smulders &
Miller, forthcoming). Conservatism as indicated by decision errors was especially
pronounced in more (vs. less) intuitive individuals (Epstein et al., 1996).
All in all it was concluded that decision making in this paradigm resulted from the
interaction of several processes: a simple strategy which only considers base-rates
conservatism), an automatic heuristic which takes into account the sample's similarity to
the parent population (representativeness heuristic), and a more controlled process of
Bayesian updating.
On the whole, results of both studies support the relevance of the dual-process perspective
to economic decision making. Human decision making can be understood as the
interaction of different processes. On the one hand, there is a deliberate strategy
corresponding to Bayesian calculations which maximizes expected utility and is therefore
rational in an economic sense. On the other hand, heuristics such as conservatism, the
representativeness heuristic or the reinforcement heuristic also play a role for human
decision making and possibly conflict with Bayesian updating. According to the dual-process approach, rational strategies are associated with a set of controlled processes, and
boundedly rational strategies are mostly associated with a set of automatic or intuitive
processes. This does not mean that these two types of processes are seen as clear cut
categories in the present thesis, but consistent with Bargh (1994; Gollwitzer & Bargh,
2005), it is assumed that these processes can be localized on a continuum. Automatic
processes can refer both to cognitive and to affective aspects of decision making (Camerer
et al., 2005; Glöckner & Witteman, 2010a), a fact which is also corroborated in the present
thesis: While the reinforcement heuristic is based on rather affective processes of
reinforcement learning, the representativeness heuristic involves rather cognitive
operations of pattern matching.
3.2 Implications, Limitations, and Ideas for Future Research
The recent years have seen an increasing interest in bringing psychology,
economics, and neuroscience together (e.g., Camerer et al., 2005; Loewenstein et al., 2008;
Sanfey, 2007). The main reason for drawing on psychological and neuroscientific
research in economics is that these disciplines do not treat the human mind as a “black
box”, but as a system that relies on processes which can be uncovered by different methods
(e.g., by recording electrocortical activity). However, decision research relying on EEG
recordings so far investigated mainly simple gambling tasks characterized by binary
choices with clearly defined outcome probabilities (e.g., Goyer et al., 2008; Hajcak,
Holroyd, et al., 2005; Hajcak et al., 2006; Kamarajan et al., 2009; Masaki et al., 2006;
Mennes et al., 2008; Nieuwenhuis et al., 2004; Toyomaki & Murohashi, 2005; Yu & Zhou,
2009). In contrast, the present research focused on tasks which required the participants to
update prior probabilities by taking into account further evidence in order to maximize
expected payoffs. In this context, the conflict between heuristic and rational strategies or
between automatic and controlled process components in economic decision making was
addressed. The investigation of the interaction between automatic and controlled processes
seems highly relevant because the actual human decision maker is most likely found
somewhere between the homo oeconomicus of classical economic theory (who is
unaffected by automatic reactions) and the agent of behavioral or evolutionary game-theoretic models (who follows mindless behavioral rules without reflection). Furthermore,
the focus was on interindividual differences in decision making. Since people differ in their
preferences for decision strategies, neglecting these differences and simply aggregating
data across participants does not seem to be an appropriate approach (see Bröder, 2000).
Research on reasoning about probability problems has shown that various details of
the decision problem may strongly influence the quality of solutions (e.g., Brase, 2009;
Erev et al., 2008; Gigerenzer & Hoffrage, 1995; Girotto & Gonzalez, 2001; Grether, 1992;
Lovett & Schunn, 1999; Nickerson, 1996; Ouwersloot et al., 1998; Rapoport, Wallsten,
Erev, & Cohen, 1990; Sanfey & Hastie, 2002). Therefore, the question may be raised as to
how seriously one can take generalizations from computer experiments which are reduced
to statistical aspects and present decision problems in a restricted manner. In spite of this
caveat, it is argued that the paradigms produce theoretically meaningful results because
many everyday situations are characterized by a similar task-set (the integration of prior
knowledge and new information) and a similar decision situation (conflict between
intuitive and rational processing). Under these circumstances many decision makers
behave in a non-optimal yet systematic manner (e.g., Charness & Levin, 2005; Edwards,
1968; El-Gamal & Grether, 1995; Grether, 1980; Kahneman & Tversky, 1972; Ouwersloot
et al., 1998) which points to the operation of several processes. Moreover, some people
consistently respond less optimally than others. In particular, the finding that process
components of decision behavior and interindividual differences were clearly associated
with distinct electrocortical activity in all of the reported studies represents strong
validation of the general significance of the results.
The present research also provided some valuable clues to the question of whether
individuals are Bayesian – or rather, under what circumstances and which individuals are
Bayesian. In light of the present findings, Kahneman and Tversky's conclusion that “man
is apparently (…) not Bayesian at all” (1972, p. 450) does not seem warranted. It is
intuitively clear that most human decision makers do not explicitly compute accurate
posterior probabilities unless the task is very simple. Still, Bayesian updating might be a
reasonable model of decision behavior at the aggregate level. In the present investigation,
when the task was designed in a way such that no simple heuristic was directly available
(Study 1b), participants' decisions were mostly in line with Bayes' rule. When
heuristics were present and in conflict with Bayesian updating (Study 1a & Study 2),
deviations from Bayesian behavior followed systematic patterns and could be associated
with these particular heuristic strategies (reinforcement heuristic, representativeness
heuristic, conservatism). Moreover, interindividual differences in the reliance on intuitive
strategies (reflected in differentiated brain activity and in self-reported thinking styles)
explained much of the variance in decision behavior.
It seems to be the case that although Bayesian updating is often perturbed by
automatic processes which are of more importance for some individuals than for others, it
plays a generally important role in human decision making (see also El-Gamal & Grether,
1995; Griffiths & Tenenbaum, 2006). This fits with growing evidence that the human brain
utilizes principles such as Bayes' theorem for solving a wide range of tasks in sensory
processing, sensorimotor control, and perceptual decision making (Rao, Olshausen, &
Lewicki, 2002). Bayes' rule has turned out to be especially useful in explaining, for
example, sensorimotor integration, force estimation, timing estimation, speed estimation,
and the interpretation of visual scenes (Knill & Richards, 1996; Körding & Wolpert, 2006).
When the brain must combine prior knowledge with current sensory information and
integrate information from different sensory channels based on the noise statistics in these
channels, it does so in a manner that closely approximates the optimum solution as
prescribed by Bayesian statistics (see also Ernst & Bülthoff, 2004; Körding, Tenenbaum,
& Shadmehr, 2007; Krakauer, Mazzoni, Ghazizadeh, Ravindran, & Shadmehr, 2006; for a
recent review, see Chater, Oaksford, Hahn, & Heit, 2010). Also, in complex decision
making tasks in which individuals are unable to calculate a normative answer, choices have
been found to approximate Bayesian solutions quite well (e.g., Glöckner & Dickert, 2008).
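For the Gaussian case, the optimum solution referred to here is the textbook reliability-weighted combination of cues, in which each cue is weighted by its inverse variance (see Ernst & Bülthoff, 2004); the numbers in the example are purely illustrative:

```python
def combine_cues(mu1, var1, mu2, var2):
    """Bayes-optimal fusion of two noisy Gaussian estimates: a
    reliability-weighted mean, with a variance below either single cue."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    return mu, 1.0 / (w1 + w2)

# Equally reliable cues are simply averaged, and uncertainty is halved:
print(combine_cues(0.0, 1.0, 10.0, 1.0))  # (5.0, 0.5)
# A nine times noisier second cue is almost ignored:
print(combine_cues(0.0, 1.0, 10.0, 9.0))  # (1.0, 0.9)
```

The combined variance is always smaller than that of either individual cue, which is the formal sense in which sensory integration based on noise statistics improves on each channel alone.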
As previously stated, automatic processes and intuitive strategies do not always
conflict with rational decision behavior. They often lead to very good results or even
outperform deliberate reasoning (e.g., Frank et al., 2006; Glöckner & Betsch, 2008a,
2008b, 2008c; Glöckner & Dickert, 2008; Hammond et al., 1987; Klein, 1993), especially
for complex and ill-structured problems containing a large amount of information (e.g.,
Dijksterhuis, 2004; Dijksterhuis & Nordgren, 2006). However, the quality of intuitive
decisions strongly depends on circumstances, person factors and task characteristics (for a
review of findings, see Plessner & Czenna, 2008). In the present research, the three
intuitive strategies of interest (reinforcement heuristic, conservatism, representativeness
heuristic) are characterized by the consideration of partial information provided by
specific features of the triggering stimulus, rather than of all aspects of the situation.
Consequently, the validity of these intuitive judgments depends
upon whether this partial information is sufficient to make a valid decision regarding the
situation at hand, i.e., whether the stimulus feature that triggered the response is a good
predictor of the criterion (Hogarth, 2005).
Further research should investigate methods of preventing or minimizing decision
makers' use of heuristic strategies under circumstances in which the heuristic is
detrimental to decision performance. The present research tested for the influence of higher
monetary incentives and did not find an effect as long as simple heuristics were available
(Study 1a). This finding is in line with previous research (see, e.g., Camerer & Hogarth,
1999). The fact that statistically skilled participants were better Bayesian updaters than
those with poor skills (Study 1a and 1b) implies that training in probability updating might
increase decision performance. Although previous research mostly failed to find an effect
of training on probability reasoning (e.g., Awasthi & Pratt, 1990; Kahneman & Tversky,
1973; Lindeman et al., 1988; Ouwersloot et al., 1998; Tversky & Kahneman, 1971,
1974), there exist studies which demonstrated the effectiveness of training (e.g., Fong et
al., 1986; Kosonen & Winne, 1995; Lehmann et al., 1988). Another line of research
examined how the reformulation of probabilities as frequencies can induce people to more
frequently make decisions which are aligned with Bayes' theorem (e.g., Cosmides &
Tooby, 1996; Gigerenzer & Hoffrage, 1995; Lindsey, Hertwig, & Gigerenzer, 2003). From
the perspective of motivational psychology, rational Bayesian behavior might also be
supported by corresponding intention formation and action plans. For example, people
might set the goal of maximizing their reward and form the plan not to base their decisions upon
their emotions, but rather upon extensive reflection. One kind of planning involves the
formation of implementation intentions described by Gollwitzer in his theory of intentional
action control (Gollwitzer, 1993, 1999). Implementation intentions specify when, where,
and how goal-directed behavior should be initiated by defining a specific situation and
linking this situation to a concrete action. In the sphere of decision making, an
implementation intention could be: “If I have to make a decision, I will take my time and
think about it,” or “If I am presented with feedback about my decision, I will ignore any
emotions arising from it.” Provided that people are committed to their goal and are
motivated to achieve it (Gollwitzer, 2006), implementation intentions have been identified
as a very effective strategy of self-regulation which helps people achieve their goals even
under adverse conditions (see Achtziger & Gollwitzer, 2007; Gollwitzer & Sheeran, 2006).
The effect of goals and implementation intentions on economic decision behavior and on
electrocortical activity is currently being investigated in the present research project.
Interestingly, variables related to the subjective value of money significantly
affected decision behavior in Study 2. The present research was not specifically designed
to evaluate factors related to the subjective valuation of money; therefore, only a few
variables pertaining to this topic were measured (Brandstätter & Brandstätter, 1996). A
further study could assess money utility in more detail and could also examine attitudes
towards and beliefs about money (e.g., Furnham, 1984; Tang, 1992, 2007; Yamauchi &
Templer, 1982). It would be interesting to also investigate possible relations between
attitudes towards money and brain activity. For example, Spinella, Yang, and Lester (2008)
found an association between prefrontal cortical dysfunction and money attitudes (as
measured by the scale of Yamauchi & Templer, 1982) that are linked to irrational financial
decision making. People with greater psychological impairments, for instance, saw money as a
symbol for power, but also as a source of worry, and were suspicious regarding monetary
transactions.
Collecting EEG data in addition to error rates and response times in the present
research allowed for a direct assessment of the time course of the processes underlying
participants' decision behavior. In this way it was found that conservative participants
made their decision before further evidence was presented (Study 2), and that participants
who were prone to rely on automatic heuristics differed from more rational participants in
very early brain responses, i.e., in an interval of less than 350 ms after stimulus
presentation (Studies 1 and 2). One limitation of the present research is that data from trials
of the same type were averaged across participants, which represents the common
approach in ERP studies. However, averaging leads to considerable loss in the dynamics of
the process of interest (Jung et al., 2001). A more precise procedure would be to predict
behavior on the basis of preceding single-trial variations in the amplitude of particular ERP
components. For instance, one direct prediction of the Reinforcement Learning Theory of
the FRN is that the amplitude reflects the degree of the negative prediction error and
therefore the amount of reinforcement learning (Holroyd & Coles, 2002; Holroyd et al.,
2002; Holroyd et al., 2003; Holroyd et al., 2005). Accordingly, reinforcement errors in the
paradigm of Study 1a should be increasingly probable with increasing FRN amplitude in
response to first-draw negative feedback (see also Cohen & Ranganath, 2007; Yasuda et
al., 2004). The development of an effective single trial approach is ongoing work in the
present research project.
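The proposed single-trial approach can be sketched on simulated data (the amplitudes, effect sizes, and trial counts below are invented for illustration and are not taken from the recorded EEG): a logistic regression predicts, trial by trial, whether a reinforcement error follows from the preceding FRN amplitude.

```python
import math
import random

def simulate_trials(n=2000, slope=1.5, intercept=-1.0, seed=7):
    """Simulate standardized single-trial FRN amplitudes together with binary
    reinforcement errors whose probability grows with the amplitude."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        amp = rng.gauss(0.0, 1.0)
        p_error = 1.0 / (1.0 + math.exp(-(intercept + slope * amp)))
        trials.append((amp, 1 if rng.random() < p_error else 0))
    return trials

def fit_logistic(trials, lr=0.5, epochs=500):
    """Plain gradient-descent logistic regression: error ~ FRN amplitude.
    Returns (intercept, slope); a positive slope means larger FRN amplitudes
    make a reinforcement error on the upcoming choice more probable."""
    b0 = b1 = 0.0
    n = len(trials)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for amp, err in trials:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * amp)))
            g0 += p - err
            g1 += (p - err) * amp
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1
```

Fitting the model to the simulated trials recovers a clearly positive slope, i.e., the probability of a reinforcement error increases with the preceding FRN amplitude – the single-trial pattern that the Reinforcement Learning Theory of the FRN predicts.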
3.3 Conclusion
It was demonstrated that conflicts between Bayesian updating and intuitive
strategies in posterior probability tasks affected participants' error rates, response times,
and electrocortical activity. Deviations from Bayesian behavior followed systematic
patterns and could be associated with particular heuristic strategies. There were large
individual differences in the extent to which decision makers relied on these simple
strategies as opposed to Bayesian reasoning, which were reflected in distinct brain activity
and self-reported thinking styles. These results are of considerable significance because in
many everyday situations people have to make decisions based on prior knowledge and
new information, and experience a conflict between intuitive and rational processing.
These processes should also be investigated with a single-trial approach by which decision
behavior is predicted on the basis of preceding variations in the amplitude of particular
ERP components. Another possible area for further investigation could be to find effective
methods to prevent or at least reduce decision makers' use of detrimental heuristics, e.g.,
by forming goals and implementation intentions. Moreover, future work could be done to
better understand the implications of individuals' beliefs about and attitudes towards
money for economic decision making.
4 References
Achtziger, A., & Alós-Ferrer, C. (2010). Fast or rational? A response times study of
Bayesian updating. Working paper, University of Konstanz.
Achtziger, A., & Gollwitzer, P. M. (2007). Motivation and volition during the course of
action. In J. Heckhausen & H. Heckhausen (Eds.), Motivation and action. (pp. 277-302).
London, England: Cambridge University Press.
Acker, F. (2008). New findings on unconscious versus conscious thought in decision
making: Additional empirical data and meta-analysis. Judgment and Decision Making, 3,
292-303.
Ainslie, G., & Haslam, N. (1992). Hyperbolic discounting. In G. Loewenstein & J. Elster
(Eds.), Choice over time. (pp. 57-92). New York, NY, US: Russell Sage Foundation.
Ajzen, I. (1977). Intuitive theories of events and the effects of base-rate information on
prediction. Journal of Personality and Social Psychology, 35, 303-314.
Akil, M., Kolachana, B., Rothmond, D. A., Hyde, T. M., Weinberger, D. R., & Kleinman,
J. E. (2003). Catechol-O-methyltransferase genotype and dopamine regulation in the
human brain. Journal of Neuroscience, 23, 2008-2013.
Alós-Ferrer, C., & Schlag, K. (2009). Imitation and learning. In P. Anand, P. Pattanaik &
C. Puppe (Eds.), The handbook of rational and social choice. Oxford, England: Oxford
University Press.
Amodio, D. M., Devine, P. G., & Harmon-Jones, E. (2008). Individual differences in the
regulation of intergroup bias: The role of conflict monitoring and neural signals for control.
Journal of Personality and Social Psychology, 94, 60-74.
Amodio, D. M., Harmon-Jones, E., Devine, P. G., Curtin, J. J., Hartley, S. L., & Covert, A.
E. (2004). Neural signals for the detection of unintentional race bias. Psychological
Science, 15, 88-93.
Amodio, D. M., Master, S. L., Yee, C. M., & Taylor, S. E. (2008). Neurocognitive
components of the behavioral inhibition and activation systems: Implications for theories
of self-regulation. Psychophysiology, 45, 11-19.
Anderson, N. H. (1991). Contributions to information integration theory, Vol. 1:
Cognition. Hillsdale, NJ, US: Lawrence Erlbaum Associates.
Ariely, D., Loewenstein, G., & Prelec, D. (2003). 'Coherent arbitrariness': Stable demand
curves without stable preferences. Quarterly Journal of Economics, 118, 73-105.
Ariely, D., & Norton, M. I. (2008). How actions create – not just reveal – preferences.
Trends in Cognitive Sciences, 12, 13-16.
Ariely, D., & Zakay, D. (2001). A timely account of the role of duration in decision
making. Acta Psychologica, 108, 187-207.
Awasthi, V., & Pratt, J. (1990). The effects of monetary incentives on effort and decision
performance: The role of cognitive characteristics. The Accounting Review, 65, 797-811.
Balconi, M., & Crivelli, D. (2010). FRN and P300 ERP effect modulation in response to
feedback sensitivity: The contribution of punishment-reward system (BIS/BAS) and
Behaviour Identification of action. Neuroscience Research, 66, 162-172.
Bandura, A. (1977). Social learning theory. Oxford, England: Prentice-Hall.
Bandura, A., Ross, D., & Ross, S. A. (1963). Vicarious reinforcement and imitative
learning. The Journal of Abnormal and Social Psychology, 67, 601-607.
Bar-Hillel, M. (1979). The role of sample size in sample evaluation. Organizational
Behavior & Human Performance, 24, 245-257.
Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments. Acta Psychologica,
44, 211-233.
Barch, D. M., Braver, T. S., Akbudak, E., Conturo, T., Ollinger, J., & Snyder, A. (2001).
Anterior cingulate cortex and response conflict: Effects of response modality and
processing domain. Cerebral Cortex, 11, 848.
Bargh, J. A. (1989). Conditional automaticity: Varieties of automatic influence in social
perception and cognition. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought. (pp.
3-51). New York, NY, US: Guilford Press.
Bargh, J. A. (1994). The four horsemen of automaticity: Awareness, intention, efficiency,
and control in social cognition. In R. S. Wyer, Jr. & T. K. Srull (Eds.), Handbook of social
cognition (2nd ed.). (pp. 1-40). Hillsdale, NJ, US: Lawrence Erlbaum Associates.
Bargh, J. A. (1997). The automaticity of everyday life. In R. S. Wyer (Ed.), The
automaticity of everyday life: Advances in social cognition, Vol. 10. (pp. 1-61). Mahwah,
NJ, US: Lawrence Erlbaum Associates.
Bargh, J. A., Chaiken, S., Raymond, P., & Hymes, C. (1996). The automatic evaluation
effect: Unconditional automatic attitude activation with a pronunciation task. Journal of
Experimental Social Psychology, 32, 104-128.
Bargh, J. A., & Williams, E. L. (2006). The automaticity of social life. Current Directions
in Psychological Science, 15, 1-4.
Barraclough, D. J., Conroy, M. L., & Lee, D. (2004). Prefrontal cortex and decision
making in a mixed-strategy game. Nature Neuroscience, 7, 404-410.
Bartholow, B. D., & Dickter, C. L. (2007). Social cognitive neuroscience of person
perception: A selective review focused on the event-related brain potential. In E. Harmon-Jones & P. Winkielman (Eds.), Social neuroscience: Integrating biological and
psychological explanations of social behavior. (pp. 376-400). New York, NY, US:
Guilford Press.
References
165
Bartholow, B. D., & Dickter, C. L. (2008). A response conflict account of the effects of
stereotypes on racial categorization. Social Cognition, 26, 314-332.
Bartholow, B. D., Pearson, M. A., Dickter, C. L., Sher, K. J., Fabiani, M., & Gratton, G.
(2005). Strategic control and medial frontal negativity: Beyond errors and response
conflict. Psychophysiology, 42, 33-42.
Bartholow, B. D., Riordan, M. A., Saults, J. S., & Lust, S. A. (2009). Psychophysiological
evidence of response conflict and strategic control of responses in affective priming.
Journal of Experimental Social Psychology, 45, 655-666.
Barto, A. G., & Sutton, R. S. (1997). Reinforcement learning in artificial intelligence. In J.
W. Donahoe & V. P. Dorsel (Eds.), Neural-network models of cognition: Biobehavioral
foundations. (pp. 358-386). Amsterdam, The Netherlands: North-Holland/Elsevier Science
Publishers.
Batson, C. D. (1975). Rational processing or rationalization? The effect of disconfirming
information on a stated religious belief. Journal of Personality and Social Psychology, 32,
176-184.
Bayes, T., & Price, R. (1763). An essay toward solving a problem in the doctrine of
chances. Philosophical Transactions of the Royal Society of London, 53, 370-418.
Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the selection of decision
strategies. Academy of Management Review, 3, 439-449.
Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: A neural theory of
economic decision. Games and Economic Behavior, 52, 336-372.
Bechara, A., Damasio, H., Damasio, A. R., & Lee, G. P. (1999). Different contributions of
the human amygdala and ventromedial prefrontal cortex to decision-making. The Journal
of Neuroscience, 19, 5473-5481.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding advantageously
before knowing the advantageous strategy. Science, 275, 1293-1294.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (2005). The Iowa Gambling Task
and the somatic marker hypothesis: Some questions and answers. Trends in Cognitive
Sciences, 9, 159-162.
Bekker, E. M., Kenemans, J. L., & Verbaten, M. N. (2005). Source analysis of the N2 in a
cued Go/NoGo task. Cognitive Brain Research, 22, 221-231.
Bellebaum, C., & Daum, I. (2008). Learning-related changes in reward expectancy are
reflected in the feedback-related negativity. European Journal of Neuroscience, 27, 1823-1835.
Bellebaum, C., Polezzi, D., & Daum, I. (2010). It is less than you expected: The feedback-related negativity reflects violations of reward magnitude expectations. Neuropsychologia,
48, 3343-3350.
Benabou, R., & Tirole, J. (2002). Self-confidence and personal motivation. Quarterly
Journal of Economics, 117, 871-915.
Benabou, R., & Tirole, J. (2004). Willpower and personal rules. Journal of Political
Economy, 112, 848-886.
Benabou, R., & Tirole, J. (2006). Incentives and prosocial behavior. American Economic
Review, 96, 1652-1678.
Benhabib, J., & Bisin, A. (2005). Modeling internal commitment mechanisms and self-control: A neuroeconomics approach to consumption-saving decisions. Games and
Economic Behavior, 52, 460-492.
Berger, C. R. (2007). A tale of two communication modes: When rational and experiential
processing systems encounter statistical and anecdotal depictions of threat. Journal of
Language and Social Psychology, 26, 215-233.
Bernheim, B. D., & Rangel, A. (2004). Addiction and cue-triggered decision processes.
American Economic Review, 94, 1558-1590.
Bernoulli, D. (1954). Exposition of a new theory on the measurement of risk.
Econometrica, 22, 23-36. (Original work published 1738).
Berns, G. S., Capra, C. M., Chappelow, J., Moore, S., & Noussair, C. (2008). Nonlinear
neurobiological probability weighting functions for aversive outcomes. NeuroImage, 39,
2047-2057.
Berns, G. S., Chappelow, J., Cecik, M., Zink, C. F., Pagnoni, G., & Martin-Skurski, M. E.
(2006). Neurobiological substrates of dread. Science, 312, 754-758.
Berns, G. S., Cohen, J. D., & Mintun, M. A. (1997). Brain regions responsive to novelty in
the absence of awareness. Science, 276, 1272-1275.
Berns, G. S., McClure, S. M., Pagnoni, G., & Montague, P. R. (2001). Predictability
modulates human brain response to reward. The Journal of Neuroscience, 21, 2793-2798.
Betsch, C. (2004). Präferenz für Intuition und Deliberation (PID): Inventar zur Erfassung
von affekt- und kognitionsbasiertem Entscheiden. Zeitschrift für Differentielle und
Diagnostische Psychologie, 25, 179-197.
Betsch, C. (2007). Chronic preferences for intuition and deliberation in decision making:
Lessons learned about intuition from an individual differences approach. In H. Plessner, C.
Betsch & T. Betsch (Eds.), Intuition in judgment and decision making. (pp. 231-248).
Mahwah, NJ, US: Lawrence Erlbaum Associates.
Betsch, C., & Iannello, P. (2010). Measuring individual differences in intuitive and
deliberate decision-making styles: A comparison of different measures. In A. Glöckner &
C. Witteman (Eds.), Foundations for tracing intuition: Challenges and methods. (pp. 251-271). New York, NY, US: Psychology Press.
Betsch, C., & Kunz, J. J. (2008). Individual strategy preferences and decisional fit. Journal
of Behavioral Decision Making, 21, 532-555.
Betsch, T., Haberstroh, S., Glöckner, A., Haar, T., & Fiedler, K. (2001). The effects of
routine strength on adaptation and information search in recurrent decision making.
Organizational Behavior and Human Decision Processes, 84, 23-53.
Betsch, T., Haberstroh, S., Molter, B., & Glöckner, A. (2004). Oops, I did it again –
relapse errors in routinized decision making. Organizational Behavior and Human
Decision Processes, 93, 62-74.
Bilder, R. M., Volavka, J., Lachman, H. M., & Grace, A. A. (2004). The catechol-O-methyltransferase polymorphism: Relations to the tonic-phasic dopamine hypothesis and
neuropsychiatric phenotypes. Neuropsychopharmacology, 29, 1943-1961.
Böcker, K. B. E., Brunia, C. H. M., & Cluitmans, P. J. M. (1994). A spatio-temporal dipole
model of the readiness potential in humans: I. Finger movement. Electroencephalography
& Clinical Neurophysiology, 91, 275-285.
Boksem, M. A. S., Tops, M., Kostermans, E., & De Cremer, D. (2008). Sensitivity to
punishment and reward omission: Evidence from error-related ERP components.
Biological Psychology, 79, 185-192.
Bonner, S. E., Hastie, R., Sprinkle, G. B., & Young, S. M. (2000). A review of the effects
of financial incentives on performance in laboratory tasks: Implications for management
accounting. Journal of Management Accounting Research, 12, 19-64.
Bonner, S. E., & Sprinkle, G. B. (2002). The effects of monetary incentives on effort and
task performance: Theories, evidence, and a framework for research. Accounting,
Organizations and Society, 27, 303-345.
Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., & Cohen, J. D. (2001).
Conflict monitoring and cognitive control. Psychological Review, 108, 624-652.
Botvinick, M. M., Nystrom, L. E., Fissell, K., Carter, C. S., & Cohen, J. D. (1999).
Conflict monitoring versus selection-for-action in anterior cingulate cortex. Nature, 402,
179-181.
Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The Self-Assessment Manikin
and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry,
25, 49-59.
Brandstätter, E., & Brandstätter, H. (1996). What's money worth? Determinants of the
subjective value of money. Journal of Economic Psychology, 17, 443-464.
Brase, G. L. (2009). How different types of participant payments alter task performance.
Judgment and Decision Making, 4, 419-428.
Braver, T. S., Barch, D. M., Gray, J. R., Molfese, D. L., & Snyder, A. (2001). Anterior
cingulate cortex and response conflict: Effects of frequency, inhibition and errors.
Cerebral Cortex, 11, 825-836.
Breiter, H. C., Aharon, I., Kahneman, D., Dale, A., & Shizgal, P. (2001). Functional
imaging of neural responses to expectancy and experience of monetary gains and losses.
Neuron, 30, 619-639.
Brewer, M. B. (1988). A dual process model of impression formation. In T. K. Srull & R.
S. Wyer, Jr. (Eds.), A dual process model of impression formation. (pp. 1-36). Hillsdale,
NJ, US: Lawrence Erlbaum Associates.
Brocas, I., & Carrillo, J. D. (2008). The Brain as a Hierarchical Organization. American
Economic Review, 98, 1312-1346.
Bröder, A. (2000). Assessing the empirical validity of the 'Take-the-best' heuristic as a
model of human probabilistic inference. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 26, 1332-1346.
Bröder, A. (2003). Decision making with the 'adaptive toolbox': Influence of
environmental structure, intelligence, and working memory load. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 29, 611-625.
Bröder, A., & Schiffer, S. (2003). Take The Best versus simultaneous feature matching:
Probabilistic inferences from memory and effects of representation format. Journal of
Experimental Psychology: General, 132, 277-293.
Brown, R. P., & Josephs, R. A. (1999). A burden of proof: Stereotype relevance and
gender differences in math performance. Journal of Personality and Social Psychology, 76,
246-257.
Bruner, J. S., & Goodman, C. C. (1947). Value and need as organizing factors in
perception. The Journal of Abnormal and Social Psychology, 42, 33-44.
Budescu, D. V., & Yu, H.-T. (2006). To Bayes or not to Bayes? A comparison of two
classes of models of information aggregation. Decision Analysis, 3, 145-162.
Burle, B., Allain, S., Vidal, F., & Hasbroucq, T. (2005). Sequential compatibility effects
and cognitive control: Does conflict really matter? Journal of Experimental Psychology:
Human Perception and Performance, 31, 831-837.
Burle, B., Roger, C., Allain, S., Vidal, F., & Hasbroucq, T. (2008). Error negativity does
not reflect conflict: A reappraisal of conflict monitoring and anterior cingulate cortex
activity. Journal of Cognitive Neuroscience, 20, 1637-1655.
Busemeyer, J. R., & Johnson, J. G. (2004). Computational models of decision making. In
D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making.
(pp. 133-154). Malden, MA, US: Blackwell Publishing.
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive
approach to decision making in an uncertain environment. Psychological Review, 100,
432-459.
Bush, G., Luu, P., & Posner, M. I. (2000). Cognitive and emotional influences in anterior
cingulate cortex. Trends in Cognitive Sciences, 4, 215-222.
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and
Social Psychology, 42, 116-131.
Cacioppo, J. T., Priester, J. R., & Berntson, G. G. (1993). Rudimentary determinants of
attitudes: II. Arm flexion and extension have differential effects on attitudes. Journal of
Personality and Social Psychology, 65, 5-17.
Camerer, C. F. (1987). Do biases in probability judgment matter in markets? Experimental
evidence. American Economic Review, 77, 981-997.
Camerer, C. F. (2003). Strategizing in the brain. Science, 300, 1673-1675.
Camerer, C. F. (2007). Neuroeconomics: Using neuroscience to make economic
predictions. Economic Journal, 117, C26-C42.
Camerer, C. F., & Ho, T.-H. (1999a). Experience-weighted attraction learning in games:
Estimates from weak-link games. In D. V. Budescu, I. Erev & R. Zwick (Eds.), Games and
human behavior: Essays in honor of Amnon Rapoport. (pp. 31-51). Mahwah, NJ, US:
Lawrence Erlbaum Associates.
Camerer, C. F., & Ho, T.-H. (1999b). Experience-weighted attraction learning in normal
form games. Econometrica, 67, 827-874.
Camerer, C. F., & Hogarth, R. M. (1999). The effects of financial incentives in
experiments: A review and capital-labor-production framework. Journal of Risk and
Uncertainty, 19, 7-42.
Camerer, C. F., Loewenstein, G., & Prelec, D. (2005). Neuroeconomics: How
neuroscience can inform economics. Journal of Economic Literature, 43, 9-64.
Carter, C. S., Braver, T. S., Barch, D. M., Botvinick, M. M., Noll, D., & Cohen, J. D.
(1998). Anterior cingulate cortex, error detection, and the online monitoring of
performance. Science, 280, 747-749.
Casscells, W., Schoenberger, A., & Graboys, T. B. (1978). Interpretation by physicians of
clinical laboratory results. New England Journal of Medicine, 299, 999-1001.
Catty, S., & Halberstadt, J. (2008). The use and disruption of familiarity in intuitive
judgments. In H. Plessner, C. Betsch & T. Betsch (Eds.), Intuition in judgment and
decision making. (pp. 295-307). Mahwah, NJ, US: Lawrence Erlbaum Associates.
Cavanagh, J. F., Frank, M. J., Klein, T. J., & Allen, J. J. B. (2010). Frontal theta links
prediction errors to behavioral adaptation in reinforcement learning. NeuroImage, 49,
3198-3209.
Chaiken, S. (1980). Heuristic versus systematic information processing and the use of
source versus message cues in persuasion. Journal of Personality and Social Psychology,
39, 752-766.
Chamorro-Premuzic, T., & Furnham, A. (2003). Personality traits and academic
examination performance. European Journal of Personality, 17, 237-250.
Charness, G., Karni, E., & Levin, D. (2007). Individual and group decision making under
risk: An experimental study of Bayesian updating and violations of first-order stochastic
dominance. Journal of Risk and Uncertainty, 35, 129-148.
Charness, G., & Levin, D. (2005). When optimal choices feel wrong: A laboratory study of
Bayesian updating, complexity, and affect. American Economic Review, 95, 1300-1309.
Charness, G., & Levin, D. (2009). The origin of the Winner's Curse: A laboratory study.
American Economic Journal: Microeconomics, 1, 207-236.
Chase, V. M., Hertwig, R., & Gigerenzer, G. (1998). Visions of rationality. Trends in
Cognitive Sciences, 2, 206-214.
Chater, N., Oaksford, M., Hahn, U., & Heit, E. (2010). Bayesian models of cognition.
Wiley Interdisciplinary Reviews: Cognitive Science, 1, 811-823.
Chen, A., Xu, P., Wang, Q., Luo, Y., Yuan, J., Yao, D., & Li, H. (2008). The timing of
cognitive control in partially incongruent categorization. Human Brain Mapping, 29, 1028-1039.
Chen, S., & Chaiken, S. (1999). The heuristic-systematic model in its broader context. In
S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology. (pp. 73-96).
New York, NY, US: Guilford Press.
Chwilla, D. J., Kolk, H. H. J., & Mulder, G. (2000). Mediated priming in the lexical
decision task: Evidence from event-related potentials and reaction time. Journal of
Memory and Language, 42, 314-341.
Cohen, J. D., Dunbar, K., & McClelland, J. L. (1990). On the control of automatic
processes: A parallel distributed processing account of the Stroop effect. Psychological
Review, 97, 332-361.
Cohen, M. X. (2007). Individual differences and the neural representations of reward
expectation and reward prediction error. Social Cognitive and Affective Neuroscience, 2,
20-30.
Cohen, M. X., Elger, C. E., & Ranganath, C. (2007). Reward expectation modulates
feedback-related negativity and EEG spectra. NeuroImage, 35, 968-978.
Cohen, M. X., & Ranganath, C. (2005). Behavioral and neural predictors of upcoming
decisions. Cognitive, Affective & Behavioral Neuroscience, 5, 117-126.
Cohen, M. X., & Ranganath, C. (2007). Reinforcement learning signals predict future
decisions. The Journal of Neuroscience, 27, 371-378.
Cohen, M. X., Young, J., Baek, J.-M., Kessler, C., & Ranganath, C. (2005). Individual
differences in extraversion and dopamine genetics predict neural reward responses.
Cognitive Brain Research, 25, 851-861.
Cokely, E. T., & Kelley, C. M. (2009). Cognitive abilities and superior decision making
under risk: A protocol analysis and process model evaluation. Judgment and Decision
Making, 4, 20-33.
Coles, M. G. (1989). Modern mind-brain reading: Psychophysiology, physiology, and
cognition. Psychophysiology, 26, 251-269.
Compton, R. J., Robinson, M. D., Ode, S., Quandt, L. C., Fineman, S. L., & Carp, J.
(2008). Error-monitoring ability predicts daily stress regulation. Psychological Science, 19,
702-708.
Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all?
Rethinking some conclusions from the literature on judgment under uncertainty. Cognition,
58, 1-73.
Courchesne, E., Hillyard, S. A., & Galambos, R. (1975). Stimulus novelty, task relevance
and the visual evoked potential in man. Electroencephalography & Clinical
Neurophysiology, 39, 131-143.
Crowe, E., & Higgins, E. T. (1997). Regulatory focus and strategic inclinations: Promotion
and prevention in decision-making. Organizational Behavior and Human Decision
Processes, 69, 117-132.
Damasio, A. R. (1994). Descartes’ error: Emotion, reason, and the human brain. New
York, NY, US: Putnam.
Danielmeier, C., Wessel, J. R., Steinhauser, M., & Ullsperger, M. (2009). Modulation of
the error-related negativity by response conflict. Psychophysiology, 46, 1288-1298.
Dave, C., & Wolfe, K. W. (2003). On confirmation bias and deviations from Bayesian
updating. Working paper, University of Pittsburgh.
Davies, P. G., Spencer, S. J., Quinn, D. M., & Gerhardstein, R. (2002). Consuming images:
How television commercials that elicit stereotype threat can restrain women academically
and professionally. Personality and Social Psychology Bulletin, 28, 1615-1628.
Davis, C., Patte, K., Tweed, S., & Curtis, C. (2007). Personality traits associated with
decision-making deficits. Personality and Individual Differences, 42, 279-290.
Daw, N. D., & Doya, K. (2006). The computational neurobiology of learning and reward.
Current Opinion in Neurobiology, 16, 199-204.
Daw, N. D., Niv, Y., & Dayan, P. (2005). Uncertainty-based competition between
prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8,
1704-1711.
de Bruijn, E. R. A., Grootens, K. P., Verkes, R. J., Buchholz, V., Hummelen, J. W., &
Hulstijn, W. (2006). Neural correlates of impulsive responding in borderline personality
disorder: ERP evidence for reduced action monitoring. Journal of Psychiatric Research,
40, 428-437.
De Houwer, J., Hermans, D., & Eelen, P. (1998). Affective and identity priming with
episodically associated stimuli. Cognition and Emotion, 12, 145-169.
de Jong, R., Wierda, M., Mulder, G., & Mulder, L. J. (1988). Use of partial stimulus
information in response processing. Journal of Experimental Psychology: Human
Perception and Performance, 14, 682-692.
De Martino, B., Kumaran, O., Seymour, B., & Dolan, R. J. (2006). Frames, biases, and
rational decision-making in the human brain. Science, 313, 684-687.
De Neys, W., & Glumicic, T. (2008). Conflict monitoring in dual process theories of
thinking. Cognition, 106, 1248-1299.
De Neys, W., Vartanian, O., & Goel, V. (2008). Smarter than we think: When our brains
detect that we are biased. Psychological Science, 19, 483-489.
De Pascalis, V., Varriale, V., & D'Antuono, L. (2010). Event-related components of the
punishment and reward sensitivity. Clinical Neurophysiology, 121, 60-76.
de Quervain, D. J. F., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck,
A., & Fehr, E. (2004). The neural basis of altruistic punishment. Science, 305, 1254-1258.
de Vries, M., Holland, R. W., & Witteman, C. L. M. (2008). In the winning mood: Affect
in the Iowa gambling task. Judgment and Decision Making, 3, 42-50.
Delgado, M. R., Locke, H. M., Stenger, V. A., & Fiez, J. A. (2003). Dorsal striatum
responses to reward and punishment: Effects of valence and magnitude manipulations.
Cognitive, Affective & Behavioral Neuroscience, 3, 27-38.
Denburg, N. L., Weller, J. A., Yamada, T. H., Shivapour, D. M., Kaup, A. R., LaLoggia,
A., . . . Bechara, A. (2009). Poor decision making among older adults is related to elevated
levels of neuroticism. Annals of Behavioral Medicine, 37, 164-172.
Denes-Raj, V., & Epstein, S. (1994). Conflict between intuitive and rational processing:
When people behave against their better judgment. Journal of Personality and Social
Psychology, 66, 819-829.
Denmark, F., Russo, N. F., Frieze, I. H., & Sechzer, J. A. (1988). Guidelines for avoiding
sexism in psychological research: A report of the Ad Hoc Committee on Nonsexist
Research. American Psychologist, 43, 582-585.
Depue, R. A., & Collins, P. F. (1999). Neurobiology of the structure of personality:
Dopamine, facilitation of incentive motivation, and extraversion. Behavioral and Brain
Sciences, 22, 491-569.
Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled
components. Journal of Personality and Social Psychology, 56, 5-18.
Dhami, M. K. (2003). Psychological models of professional decision making.
Psychological Science, 14, 175-180.
Diehl, B., Dinner, D. S., Mohamed, A., Najm, I., Klem, G., LaPresto, E., . . . Luders, H. O.
(2000). Evidence of cingulate motor representation in humans. Neurology, 55, 725-728.
Dijksterhuis, A. (2004). Think different: The merits of unconscious thought in preference
development and decision making. Journal of Personality and Social Psychology, 87, 586-598.
Dijksterhuis, A., & Nordgren, L. F. (2006). A theory of unconscious thought. Perspectives
on Psychological Science, 1, 95-109.
Donahoe, J. W., & Dorsel, V. P. (1997). Neural-network models of cognition:
Biobehavioral foundations. Amsterdam, The Netherlands: North-Holland/Elsevier Science
Publishers.
Donchin, E., & Coles, M. G. (1988). Is the P300 component a manifestation of context
updating? Behavioral and Brain Sciences, 11, 357-427.
Donkers, F. C. L., & van Boxtel, G. J. M. (2004). The N2 in go/no-go tasks reflects
conflict monitoring not response inhibition. Brain and Cognition, 56, 165-176.
Dougherty, M. R. P., Gettys, C. F., & Ogden, E. E. (1999). MINERVA-DM: A memory
processes model for judgments of likelihood. Psychological Review, 106, 180-209.
DuCharme, W. M. (1970). Response bias explanation of conservative human inference.
Journal of Experimental Psychology, 85, 66-74.
Dum, R. P., & Strick, P. L. (1993). Cingulate motor areas. In B. A. Vogt & M. Gabriel
(Eds.), Neurobiology of cingulate cortex and limbic thalamus: A comprehensive handbook.
(pp. 415-441). Cambridge, MA, US: Birkhauser.
Dum, R. P., & Strick, P. L. (1996). Spinal cord terminations of the medial wall motor areas
in macaque monkeys. The Journal of Neuroscience, 16, 6513-6525.
Duncan-Johnson, C. C., & Donchin, E. (1977). On quantifying surprise: The variation of
event-related potentials with subjective probability. Psychophysiology, 14, 456-467.
Eddy, D. M. (1982). Probabilistic reasoning in clinical medicine: Problems and
opportunities. In D. Kahneman, P. Slovic & A. Tversky (Eds.), Judgment under
uncertainty: Heuristics and biases. (pp. 249-267). Cambridge, MA, US: Cambridge
University Press.
Edwards, K., & Smith, E. E. (1996). A disconfirmation bias in the evaluation of arguments.
Journal of Personality and Social Psychology, 71, 5-24.
Edwards, W. (1968). Conservatism in human information processing. In B. Kleinmuntz
(Ed.), Formal representation of human judgment. (pp. 17-52). New York, NY, US: Wiley.
Egan, M. F., Goldberg, T. E., Kolachana, B. S., Callicott, J. H., Mazzanti, C. M., Straub, R.
E., . . . Weinberger, D. R. (2001). Effect of COMT Val108/158 Met genotype on frontal
lobe function and risk for schizophrenia. Proceedings of the National Academy of Sciences
of the United States of America, 98, 6917-6922.
Eimer, M. (1993). Effects of attention and stimulus probability on ERPs in a Go/Nogo
task. Biological Psychology, 35, 123-138.
Eimer, M. (1995). Stimulus-response compatibility and automatic response activation:
Evidence from psychophysiological studies. Journal of Experimental Psychology: Human
Perception and Performance, 21, 837-854.
Eimer, M. (1998). The lateralized readiness potential as an on-line measure of central
response activation processes. Behavior Research Methods, Instruments, & Computers, 30,
146-156.
El-Gamal, M. A., & Grether, D. M. (1995). Are people Bayesian? Uncovering behavioral
strategies. Journal of the American Statistical Association, 90, 1137-1145.
Elliott, R., Newman, J. L., Longe, O. A., & Deakin, J. F. W. (2003). Differential response
patterns in the striatum and orbitofrontal cortex to financial reward in humans: A
parametric functional magnetic resonance imaging study. The Journal of Neuroscience, 23,
303-307.
Endrass, T., Klawohn, J., Schuster, F., & Kathmann, N. (2008). Overactive performance
monitoring in obsessive-compulsive disorder: ERP evidence from correct and erroneous
reactions. Neuropsychologia, 46, 1877-1887.
Engelmann, J. B., Capra, C. M., Noussair, C., & Berns, G. S. (2009). Expert financial
advice neurobiologically "offloads" financial decision-making under risk. Public Library
of Science One, 4, e4957.
Engelmann, J. B., Damaraju, E., Padmala, S., & Pessoa, L. (2009). Combined effects of
attention and motivation on visual task performance: Transient and sustained motivational
effects. Frontiers in Human Neuroscience, 3.
Enriquez-Geppert, S., Konrad, C., Pantev, C., & Huster, R. J. (2010). Conflict and
inhibition differentially affect the N200/P300 complex in a combined go/nogo and stop-signal task. NeuroImage, 51, 877-887.
Eppinger, B., Kray, J., Mock, B., & Mecklinger, A. (2008). Better or worse than expected?
Aging, learning, and the ERN. Neuropsychologia, 46, 521-539.
Epstein, S. (1994). Integration of the cognitive and the psychodynamic unconscious.
American Psychologist, 49, 709-724.
Epstein, S. (2007). Intuition from the perspective of cognitive-experiential self-theory. In
H. Plessner, C. Betsch & T. Betsch (Eds.), Intuition in judgment and decision making. (pp.
23-37). Mahwah, NJ, US: Lawrence Erlbaum Associates.
Epstein, S., & Pacini, R. (1999). Some basic issues regarding dual-process theories from
the perspective of cognitive-experiential theory. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology. (pp. 462-482). New York, NY, US: Guilford Press.
Epstein, S., Pacini, R., Denes-Raj, V., & Heier, H. (1996). Individual differences in
intuitive-experiential and analytical-rational thinking styles. Journal of Personality and
Social Psychology, 71, 390-405.
Erev, I., & Roth, A. E. (1998). Predicting how people play games: Reinforcement learning
in experimental games with unique, mixed strategy equilibria. American Economic Review,
88, 848-881.
Erev, I., Shimonowitch, D., Schurr, A., & Hertwig, R. (2008). Base rates: How to make the
intuitive mind appreciate or neglect them. In H. Plessner, C. Betsch & T. Betsch (Eds.),
Intuition in judgment and decision making. (pp. 135-148). Mahwah, NJ, US: Lawrence
Erlbaum Associates.
Erev, I., Wallsten, T. S., & Budescu, D. V. (1994). Simultaneous over- and
underconfidence: The role of error in judgment processes. Psychological Review, 101, 519-527.
Ernst, M. O., & Bülthoff, H. H. (2004). Merging the senses into a robust percept. Trends in
Cognitive Sciences, 8, 162-169.
Evans, J. S. B. T. (2002). Logic and human reasoning: An assessment of the deduction
paradigm. Psychological Bulletin, 128, 978-996.
Evans, J. S. B. T. (2006). The heuristic-analytic theory of reasoning: Extension and
evaluation. Psychonomic Bulletin & Review, 13, 378-395.
Evans, J. S. B. T. (2007a). Hypothetical thinking: Dual processes in reasoning and
judgment. New York, NY, US: Psychology Press.
Evans, J. S. B. T. (2007b). On the resolution of conflict in dual process theories of
reasoning. Thinking & Reasoning, 13, 321-339.
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social
cognition. Annual Review of Psychology, 59, 255-278.
Evans, J. S. B. T., & Over, D. E. (1996). Rationality and reasoning. Hove, England:
Psychology Press.
Fabiani, M., Gratton, G., & Coles, M. G. H. (2000). Event-related brain potentials. In J. T.
Cacioppo, L. G. Tassinary & G. G. Berntson (Eds.), Handbook of psychophysiology (2nd
ed.). (pp. 53-84). New York, NY, US: Cambridge University Press.
Fagerlin, A., Zikmund-Fisher, B. J., Ubel, P. A., Jankovic, A., Derry, H. A., & Smith, D.
M. (2007). Measuring numeracy without a math test: Development of the Subjective
Numeracy Scale. Medical Decision Making, 27, 672-680.
Falkenstein, M., Hohnsbein, J., Hoormann, J., & Blanke, L. (1991). Effects of crossmodal
divided attention on late ERP components: II. Error processing in choice reaction tasks.
Electroencephalography & Clinical Neurophysiology, 78, 447-455.
Fazio, R. H. (1990). A practical guide to the use of response latency in social
psychological research. In C. Hendrick & M. S. Clark (Eds.), Research methods in
personality and social psychology. (pp. 74-97). Thousand Oaks, CA, US: Sage
Publications, Inc.
Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, F. R. (1986). On the
automatic activation of attitudes. Journal of Personality and Social Psychology, 50, 229-238.
Fein, G., & Chang, M. (2008). Smaller feedback ERN amplitudes during the BART are
associated with a greater family history density of alcohol problems in treatment-naive
alcoholics. Drug and Alcohol Dependence, 92, 141-148.
Ferreira, M. B., Garcia-Marques, L., Sherman, S. J., & Sherman, J. W. (2006). Automatic
and controlled components of judgment and decision making. Journal of Personality and
Social Psychology, 91, 797-813.
Ferring, D., & Filipp, S.-H. (1996). Messung des Selbstwertgefühls: Befunde zu
Reliabilität, Validität und Stabilität der Rosenberg-Skala. Diagnostica, 42, 284-292.
Fessler, D. M. T., Pillsworth, E. G., & Flamson, T. J. (2004). Angry men and disgusted
women: An evolutionary approach to the influence of emotions on risk taking.
Organizational Behavior and Human Decision Processes, 95, 107-123.
Fiedler, K., & Kareev, Y. (2008). Implications and ramifications of a sample-size approach
to intuition. In H. Plessner, C. Betsch & T. Betsch (Eds.), Intuition in judgment and
decision making. (pp. 149-170). Mahwah, NJ, US: Lawrence Erlbaum Associates, Inc.,
Publishers.
Fiedler, K., Walther, E., & Nickel, S. (1999). The auto-verification of social hypotheses:
Stereotyping and the power of sample size. Journal of Personality and Social Psychology,
77, 5-18.
Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in
judgments of risks and benefits. Journal of Behavioral Decision Making, 13, 11-17.
Fischhoff, B., & Beyth-Marom, R. (1983). Hypothesis evaluation from a Bayesian
perspective. Psychological Review, 90, 239-260.
Fiske, S. T., Lin, M., & Neuberg, S. L. (1999). The continuum model: Ten years later. In S.
Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology. (pp. 231-254).
New York, NY, US: Guilford Press.
Fiske, S. T., & Neuberg, S. L. (1990). A continuum of impression formation, from
category-based to individuating processes: Influences of information and motivation on
attention and interpretation. In P. Z. Mark (Ed.), Advances in experimental social
psychology. (pp. 1-74). Riverport, MO, US: Academic Press.
Folstein, J. R., & Van Petten, C. (2008). Influence of cognitive control and mismatch on
the N2 component of the ERP: A review. Psychophysiology, 45, 152-170.
Fong, G. T., Krantz, D. H., & Nisbett, R. E. (1986). The effects of statistical training on
thinking about everyday problems. Cognitive Psychology, 18, 253-292.
Frank, M. J. (2005). Dynamic dopamine modulation in the basal ganglia: A
neurocomputational account of cognitive deficits in medicated and nonmedicated
parkinsonism. Journal of Cognitive Neuroscience, 17, 51-72.
Frank, M. J., & Claus, E. D. (2006). Anatomy of a decision: Striato-orbitofrontal
interactions in reinforcement learning, decision making, and reversal. Psychological
Review, 113, 300-326.
Frank, M. J., Cohen, M. X., & Sanfey, A. G. (2009). Multiple systems in decision making:
A neurocomputational perspective. Current Directions in Psychological Science, 18, 73-77.
Frank, M. J., O'Reilly, R. C., & Curran, T. (2006). When memory fails, intuition reigns:
Midazolam enhances implicit inference in humans. Psychological Science, 17, 700-707.
Frank, M. J., Seeberger, L. C., & O'Reilly, R. C. (2004). By carrot or by stick: Cognitive
reinforcement learning in parkinsonism. Science, 306, 1940-1943.
Frank, M. J., Woroch, B. S., & Curran, T. (2005). Error-related negativity predicts
reinforcement learning and conflict biases. Neuron, 47, 495-501.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic
Perspectives, 19, 25-42.
Friedman, D., Cycowicz, Y. M., & Gaeta, H. (2001). The novelty P3: An event-related
brain potential (ERP) sign of the brain's evaluation of novelty. Neuroscience and
Biobehavioral Reviews, 25, 355-373.
Friese, M., Hofmann, W., & Wänke, M. (2008). When impulses take over: Moderated
predictive validity of explicit and implicit attitude measures in predicting food choice and
consumption behaviour. British Journal of Social Psychology, 47, 397-419.
Friese, M., Wänke, M., & Plessner, H. (2006). Implicit consumer preferences and their
influence on product choice. Psychology & Marketing, 23, 727-740.
Fritzsche, A.-S., Stahl, J., & Gibbons, H. (2010). An ERP study of the processing of
response conflict in a dynamic localization task: The role of individual differences in
task-appropriate behavior. Clinical Neurophysiology, 121, 1358-1370.
Fudenberg, D., & Levine, D. K. (1998). The theory of learning in games. Cambridge, MA,
US: MIT Press.
Fudenberg, D., & Levine, D. K. (2006). A dual-self model of impulse control. American
Economic Review, 96, 1449-1476.
Fujikawa, T., & Oda, S. H. (2005). A laboratory study of Bayesian updating in small
feedback-based decision problems. American Journal of Applied Sciences, 2, 1129-1133.
Furnham, A. F. (1984). Many sides of the coin: The psychology of money usage.
Personality and Individual Differences, 5, 501-509.
Ganguly, A. R., Kagel, J. H., & Moser, D. V. (2000). Do asset market prices reflect
traders' judgment biases? Journal of Risk and Uncertainty, 20, 219-245.
Gehring, W. J., Goss, B., Coles, M. G., Meyer, D. E., & Donchin, E. (1993). A neural
system for error detection and compensation. Psychological Science, 4, 385-390.
Gehring, W. J., Gratton, G., Coles, M. G., & Donchin, E. (1992). Probability effects on
stimulus evaluation and response processes. Journal of Experimental Psychology: Human
Perception and Performance, 18, 198-216.
Gehring, W. J., Himle, J., & Nisenson, L. G. (2000). Action-monitoring dysfunction in
obsessive-compulsive disorder. Psychological Science, 11, 1-6.
Gehring, W. J., & Willoughby, A. R. (2002). The medial frontal cortex and the rapid
processing of monetary gains and losses. Science, 295, 2279-2282.
Gehring, W. J., & Willoughby, A. R. (2004). Are all medial frontal negativities created
equal? Toward a richer empirical basis for theories of action monitoring. In M. Ullsperger
& M. Falkenstein (Eds.), Errors, conflicts, and the brain: Current opinions on
performance monitoring. (pp. 14-21). Leipzig, Germany: MPI of Human Cognitive and
Brain Sciences.
Gerlitz, J.-Y., & Schupp, J. (2005). Zur Erhebung der Big-Five-basierten
Persönlichkeitsmerkmale im SOEP. DIW Research Notes 2005-4.
Gianotti, L. R. R., Knoch, D., Faber, P. L., Lehmann, D., Pascual-Marqui, R. D., Diezi, C.,
. . . Fehr, E. (2009). Tonic activity level in the right prefrontal cortex predicts individuals'
risk taking. Psychological Science, 20, 33-38.
Gibson, B. (2008). Can evaluative conditioning change attitudes toward mature brands?
New evidence from the Implicit Association Test. Journal of Consumer Research, 35,
178-188.
Gigerenzer, G. (2007). Gut feelings: The intelligence of the unconscious. New York, NY,
US: Viking Press.
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of
bounded rationality. Psychological Review, 103, 650-669.
Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without
instruction: Frequency formats. Psychological Review, 102, 684-704.
Gigerenzer, G., Todd, P. M., & the ABC Research Group (1999). Simple heuristics that
make us smart. New York, NY, US: Oxford University Press.
Gilbert, D. T. (2002). Inferential correction. In T. Gilovich, D. Griffin & D. Kahneman
(Eds.), Heuristics and biases: The psychology of intuitive judgment. (pp. 167-184). New
York, NY, US: Cambridge University Press.
Gilligan, C. (1982). In a different voice: Psychological theory and women’s development.
Cambridge, MA, US: Harvard University Press.
Girotto, V., & Gonzalez, M. (2001). Solving probabilistic and statistical problems: A
matter of information structure and question form. Cognition, 78, 247-276.
Glöckner, A. (2008). Does intuition beat fast and frugal heuristics? A systematic empirical
analysis. In H. Plessner, C. Betsch & T. Betsch (Eds.), Intuition in judgment and decision
making. (pp. 309-325). Mahwah, NJ, US: Lawrence Erlbaum Associates.
Glöckner, A., & Betsch, T. (2008a). Do people make decisions under risk based on
ignorance? An empirical test of the priority heuristic against cumulative prospect theory.
Organizational Behavior and Human Decision Processes, 107, 75-95.
Glöckner, A., & Betsch, T. (2008b). Modeling option and strategy choices with
connectionist networks: Towards an integrative model of automatic and deliberate decision
making. Judgment and Decision Making, 3, 215-228.
Glöckner, A., & Betsch, T. (2008c). Multiple-reason decision making based on automatic
processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34,
1055-1075.
Glöckner, A., & Dickert, S. (2008). Base-rate respect by intuition: Approximating rational
choices in base-rate tasks with multiple cues. Working Paper Series of the Max Planck
Institute for Research on Collective Goods: Max Planck Institute for Research on
Collective Goods.
Glöckner, A., & Herbold, A.-K. (2008). Information processing in decisions under risk:
Evidence for compensatory strategies based on automatic processes. MPI Collective Goods
Preprint, No. 42.
Glöckner, A., & Witteman, C. (2010a). Beyond dual-process models: A categorisation of
processes underlying intuitive judgement and decision making. Thinking & Reasoning, 16,
1-25.
Glöckner, A., & Witteman, C. (2010b). Foundations for tracing intuition: Challenges and
methods. New York, NY, US: Psychology Press.
Gollwitzer, P. M. (1993). Goal achievement: The role of intentions. In W. Stroebe & M.
M. Hewstone (Eds.), European review of social psychology, Vol. 4. (pp. 141-185).
Chichester, England: Wiley.
Gollwitzer, P. M. (1999). Implementation intentions: Strong effects of simple plans.
American Psychologist, 54, 493-503.
Gollwitzer, P. M. (2006). Open questions in implementation intention research. Social
Psychological Review, 8, 14-18.
Gollwitzer, P. M., & Bargh, J. A. (2005). Automaticity in goal pursuit. In A. J. Elliot & C.
S. Dweck (Eds.), Handbook of competence and motivation. (pp. 624-646). New York, NY,
US: Guilford Publications.
Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement:
A meta-analysis of effects and processes. In M. P. Zanna (Ed.), Advances in experimental
social psychology, Vol 38. (pp. 69-119). San Diego, CA, US: Elsevier Academic Press.
Goodie, A. S., & Crooks, C. L. (2004). Time-pressure effects on performance in a
base-rate task. Journal of General Psychology, 131, 18-28.
Goyer, J. P., Woldorff, M. G., & Huettel, S. A. (2008). Rapid electrophysiological brain
responses are influenced by both valence and magnitude of monetary rewards. Journal of
Cognitive Neuroscience, 20, 2058-2069.
Gratton, G., Bosco, C. M., Kramer, A. F., Coles, M. G., Wickens, C. D., & Donchin, E.
(1990). Event-related brain potentials as indices of information extraction and response
priming. Electroencephalography & Clinical Neurophysiology, 75, 419-432.
Gratton, G., Coles, M. G. H., Sirevaag, E. J., Eriksen, C. W., & Donchin, E. (1988). Pre-
and poststimulus activation of response channels: A psychophysiological analysis. Journal
of Experimental Psychology: Human Perception and Performance, 14, 331-344.
Gray, J. A. (1982). The neuropsychology of anxiety: An enquiry into the functions of the
septo-hippocampal system. New York, NY, US: Oxford University Press.
Grether, D. M. (1980). Bayes rule as a descriptive model: The representativeness heuristic.
Quarterly Journal of Economics, 95, 537-557.
Grether, D. M. (1992). Testing Bayes rule and the representativeness heuristic: Some
experimental evidence. Journal of Economic Behavior and Organization, 17, 31-57.
Griffin, D., & Tversky, A. (1992). The weighing of evidence and the determinants of
confidence. Cognitive Psychology, 24, 411-435.
Griffiths, T. L., & Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition.
Psychological Science, 17, 767-773.
Hackley, S. A., & Miller, J. (1995). Response complexity and precue interval effects on the
lateralized readiness potential. Psychophysiology, 32, 230-241.
Haggard, P., & Eimer, M. (1999). On the relation between brain potentials and the
awareness of voluntary movements. Experimental Brain Research, 126, 128-133.
Hajcak, G., & Foti, D. (2008). Errors are aversive: Defensive motivation and the
error-related negativity. Psychological Science, 19, 103-108.
Hajcak, G., Holroyd, C. B., Moser, J. S., & Simons, R. F. (2005). Brain potentials
associated with expected and unexpected good and bad outcomes. Psychophysiology, 42,
161-170.
Hajcak, G., Moser, J. S., Holroyd, C. B., & Simons, R. F. (2006). The feedback-related
negativity reflects the binary evaluation of good versus bad outcomes. Biological
Psychology, 71, 148-154.
Hajcak, G., Moser, J. S., Holroyd, C. B., & Simons, R. F. (2007). It's worse than you
thought: the feedback negativity and violations of reward prediction in gambling tasks.
Psychophysiology, 44, 905-912.
Hajcak, G., Moser, J. S., Yeung, N., & Simons, R. F. (2005). On the ERN and the
significance of errors. Psychophysiology, 42, 151-160.
Hammerton, M. (1973). A case of radical probability estimation. Journal of Experimental
Psychology, 101, 252-254.
Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1987). Direct comparison of
the efficacy of intuitive and expert cognition in expert judgment. IEEE Transactions on
Systems, Man, and Cybernetics, 5, 753-770.
Harlé, K. M., & Sanfey, A. G. (2007). Incidental sadness biases social economic decisions
in the Ultimatum Game. Emotion, 7, 876-881.
Harrison, G. W. (1994). Expected utility theory and the experimentalists. Empirical
Economics, 19, 223-253.
Heil, M., Osman, A., Wiegelmann, J., Rolke, B., & Hennighausen, E. (2000). N200 in the
Eriksen-task: Inhibitory executive process? Journal of Psychophysiology, 14, 218-225.
Heldmann, M., Rüsseler, J., & Münte, T. F. (2008). Internal and external information in
error processing. BMC Neuroscience, 9, 33.
Hewig, J., Trippe, R., Hecht, H., Coles, M. G. H., Holroyd, C. B., & Miltner, W. H. R.
(2007). Decision-making in blackjack: An electrophysiological analysis. Cerebral Cortex,
17, 865-877.
Higgins, E. T. (1997). Beyond pleasure and pain. American Psychologist, 52, 1280-1300.
Hilbig, B. E. (2008). Individual differences in fast-and-frugal decision making:
Neuroticism and the recognition heuristic. Journal of Research in Personality, 42,
1641-1645.
Hintzman, D. L. (1988). Judgments of frequency and recognition memory in a
multiple-trace memory model. Psychological Review, 95, 528-551.
Hirsh, J. B., & Inzlicht, M. (2010). Error-related negativity predicts academic performance.
Psychophysiology, 47, 192-196.
Hirsh, J. B., Morisano, D., & Peterson, J. B. (2008). Delay discounting: Interactions
between personality and cognitive ability. Journal of Research in Personality, 42,
1646-1650.
Hirshleifer, D. (2001). Investor psychology and asset pricing. The Journal of Finance, 56,
1533-1597.
Hodes, R. L., Cook, E. W., & Lang, P. J. (1985). Individual differences in autonomic
response: Conditioned association or conditioned fear? Psychophysiology, 22, 545-560.
Hofmann, W., Rauch, W., & Gawronski, B. (2007). And deplete us not into temptation:
Automatic attitudes, dietary restraint, and self-regulatory resources as determinants of
eating behavior. Journal of Experimental Social Psychology, 43, 497-504.
Hogarth, R. M. (2001). Educating intuition. Chicago, IL, US: University of Chicago Press.
Hogarth, R. M. (2005). Deciding analytically or trusting your intuition? The advantages
and disadvantages of analytic and intuitive thought. In T. Betsch & S. Haberstroh (Eds.),
The routines of decision making. (pp. 67-82). Mahwah, NJ, US: Lawrence Erlbaum
Associates.
Holroyd, C. B., & Coles, M. G. H. (2002). The neural basis of human error processing:
Reinforcement learning, dopamine, and the error-related negativity. Psychological Review,
109, 679-709.
Holroyd, C. B., & Coles, M. G. H. (2008). Dorsal anterior cingulate cortex integrates
reinforcement history to guide voluntary behavior. Cortex, 44, 548-559.
Holroyd, C. B., Coles, M. G. H., & Nieuwenhuis, S. (2002). Medial prefrontal cortex and
error potentials. Science, 296, 1610-1611.
Holroyd, C. B., Hajcak, G., & Larsen, J. T. (2006). The good, the bad and the neutral:
Electrophysiological responses to feedback stimuli. Brain Research, 1105, 93-101.
Holroyd, C. B., & Krigolson, O. E. (2007). Reward prediction error signals associated with
a modified time estimation task. Psychophysiology, 44, 913-917.
Holroyd, C. B., Krigolson, O. E., Baker, R., Lee, S., & Gibson, J. (2009). When is an error
not a prediction error? An electrophysiological investigation. Cognitive, Affective &
Behavioral Neuroscience, 9, 59-70.
Holroyd, C. B., Larsen, J. T., & Cohen, J. D. (2004). Context dependence of the
event-related brain potential associated with reward and punishment. Psychophysiology,
41, 245-253.
Holroyd, C. B., Nieuwenhuis, S., Yeung, N., & Cohen, J. D. (2003). Errors in reward
prediction are reflected in the event-related brain potential. NeuroReport, 14, 2481-2484.
Holroyd, C. B., Yeung, N., Coles, M. G. H., & Cohen, J. D. (2005). A mechanism for error
detection in speeded response time tasks. Journal of Experimental Psychology: General,
134, 163-191.
Holyoak, K. J., & Simon, D. (1999). Bidirectional reasoning in decision making by
constraint satisfaction. Journal of Experimental Psychology: General, 128, 3-31.
Horstmann, N., Ahlgrimm, A., & Glöckner, A. (2009). How distinct are intuition and
deliberation? An eye-tracking analysis of instruction-induced decision modes. Judgment
and Decision Making, 4, 335-354.
Horstmann, N., Hausmann, D., & Ryf, S. (2010). Methods for inducing intuitive and
deliberate processing modes. In A. Glöckner & C. Witteman (Eds.), Foundations for
tracing intuition: Challenges and methods. (pp. 219-237). New York, NY, US: Psychology
Press.
Hsu, M., Bhatt, M., Adolphs, R., Tranel, D., & Camerer, C. F. (2005). Neural systems
responding to degrees of uncertainty in human decision-making. Science, 310, 1680-1683.
Huang, G. T. (2005). The economics of brains. Technology Review.
Hubert, M., & Kenning, P. (2008). A current overview of consumer neuroscience. Journal
of Consumer Behaviour, 7, 272-292.
Huettel, S. A. (2006). Behavioral, but not reward, risk modulates activation of prefrontal,
parietal, and insular cortices. Cognitive, Affective, & Behavioral Neuroscience, 6, 141-151.
Huettel, S. A., Song, A. W., & McCarthy, G. (2005). Decisions under uncertainty:
Probabilistic context influences activation of prefrontal and parietal cortices. The Journal
of Neuroscience, 25, 3304-3311.
Huettel, S. A., Stowe, C. J., Gordon, E. M., Warner, B. T., & Platt, M. L. (2006). Neural
signatures of economic preferences for risk and ambiguity. Neuron, 49, 765-775.
Ichikawa, N., Siegle, G. J., Dombrovski, A., & Ohira, H. (2010). Subjective and
model-estimated reward prediction: Association with the feedback-related negativity (FRN) and
reward prediction error in a reinforcement learning task. International Journal of
Psychophysiology, 78, 273-283.
Inzlicht, M., & Ben-Zeev, T. (2000). A threatening intellectual environment: Why females
are susceptible to experiencing problem-solving deficits in the presence of males.
Psychological Science, 11, 365-371.
Jackson, S. R., Jackson, G. M., & Roberts, M. (1999). The selection and suppression of
action: ERP correlates of executive control in humans. NeuroReport, 10, 861-865.
Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from
intentional uses of memory. Journal of Memory and Language, 30, 513-541.
Jenkins, G. D., Jr., Mitra, A., Gupta, N., & Shaw, J. D. (1998). Are financial incentives
related to performance? A meta-analytic review of empirical research. Journal of Applied
Psychology, 83, 777-787.
Jenkins, I. H., Brooks, D. J., Nixon, P. D., Frackowiak, R. S. J., & Passingham, R. E.
(1994). Motor sequence learning: A study with positron emission tomography. The Journal
of Neuroscience, 14, 3775-3790.
Jodo, E., & Kayama, Y. (1992). Relation of a negative ERP component to response
inhibition in a Go/No-go task. Electroencephalography & Clinical Neurophysiology, 82,
477-482.
Joelving, F. (2009, November 4). How you learn more from success than failure – The
brain may not learn from its mistakes after all. Scientific American Mind. Retrieved
January 31, 2011, from: http://www.scientificamerican.com/article.cfm?id=why-success-breeds-success
Johnson, R., & Donchin, E. (1980). P300 and stimulus categorization: Two plus one is not
so different from one plus one. Psychophysiology, 17, 167-178.
Jones, A. D., Cho, R. Y., Nystrom, L. E., Cohen, J. D., & Braver, T. S. (2002). A
computational model of anterior cingulate function in speeded response tasks: Effects of
frequency, sequence, and conflict. Cognitive, Affective & Behavioral Neuroscience, 2,
300-317.
Jones, M., & Sugden, R. (2001). Positive confirmation bias in the acquisition of
information. Theory and Decision, 50, 59-99.
Jonkman, L. M., Sniedt, F. L. F., & Kemner, C. (2007). Source localization of the
Nogo-N2: A developmental study. Clinical Neurophysiology, 118, 1069-1077.
Jung, T.-P., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E., & Sejnowski, T. J.
(2001). Analysis and visualizations of single-trial event related potentials. Human Brain
Mapping, 14, 166-185.
Juslin, P., & Persson, M. (2002). PROBabilities from EXemplars (PROBEX): A 'lazy'
algorithm for probabilistic inference from generic knowledge. Cognitive Science: A
Multidisciplinary Journal, 26, 563-607.
Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics.
American Economic Review, 93, 1449-1475.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution
in intuitive judgment. In T. Gilovich, D. Griffin & D. Kahneman (Eds.), Heuristics and
biases: The psychology of intuitive judgment. (pp. 49-81). New York, NY, US: Cambridge
University Press.
Kahneman, D., & Frederick, S. (2005). A model of heuristic judgment. In K. J. Holyoak &
R. G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning. (pp. 267-293).
New York, NY, US: Cambridge University Press.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics
and biases. New York, NY, US: Cambridge University Press.
Kahneman, D., & Treisman, A. (1984). Changing views of attention and automaticity. In
R. Parasuraman & D. R. Davies (Eds.), Varieties of attention. (pp. 29-61). New York,
NY, US: Academic Press.
Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of
representativeness. Cognitive Psychology, 3, 430-454.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological
Review, 80, 237-251.
Kamarajan, C., Porjesz, B., Rangaswamy, M., Tang, Y., Chorlian, D. B.,
Padmanabhapillai, A., . . . Begleiter, H. (2009). Brain signatures of monetary loss and gain:
Outcome-related potentials in a single outcome gambling task. Behavioural Brain
Research, 197, 62-76.
Kane, M. J., & Engle, R. W. (2002). The role of prefrontal cortex in working-memory
capacity, executive attention, and general fluid intelligence: An individual-differences
perspective. Psychonomic Bulletin & Review, 9, 637-671.
Keller, I., & Heckhausen, H. (1990). Readiness potentials preceding spontaneous motor
acts: Voluntary vs. involuntary control. Electroencephalography & Clinical
Neurophysiology, 76, 351-361.
Keller, J., Bohner, G., & Erb, H.-P. (2000). Intuitive und heuristische Verarbeitung?
Verschiedene Prozesse? Präsentation einer deutschen Fassung des "Rational-Experiential
Inventory" sowie neuer Selbstberichtskalen zur Heuristiknutzung. Zeitschrift für
Sozialpsychologie, 31, 87-101.
Kerns, J. G., Cohen, J. D., MacDonald, A. W., III, Cho, R. Y., Stenger, V. A., & Carter, C.
S. (2004). Anterior cingulate conflict monitoring predicts adjustments in control. Science,
303, 1023-1026.
Ketelaar, T., & Au, W. T. (2003). The effects of feelings of guilt on the behaviour of
uncooperative individuals in repeated social bargaining games: An affect-as-information
interpretation of the role of emotion in social interaction. Cognition and Emotion, 17,
429-453.
Klaczynski, P. A. (2000). Motivated scientific reasoning biases, epistemological beliefs,
and theory polarization: A two-process approach to adolescent cognition. Child
Development, 71, 1347-1366.
Klayman, J. (1995). Varieties of confirmation bias. Psychology of Learning and
Motivation, 42, 385-418.
Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision
making. In G. A. Klein, J. Orasanu, R. Calderwood & C. E. Zsambok (Eds.), Decision
making in action: Models and methods. (pp. 138-147). Westport, CT, US: Ablex
Publishing.
Klein, G. A. (2003). Intuition at work. New York, NY, US: Doubleday.
Knill, D. C., & Richards, W. A. (1996). Perception as Bayesian inference. Cambridge,
MA, US: Cambridge University Press.
Knoch, D., Gianotti, L. R. R., Baumgartner, T., & Fehr, E. (2010). A neural marker of
costly punishment behavior. Psychological Science, 21, 337-342.
Knutson, B., Taylor, J., Kaufman, M., Peterson, R., & Glover, G. (2005). Distributed
neural representation of expected value. The Journal of Neuroscience, 25, 4806-4812.
Kok, A. (1986). Effects of degradation of visual stimuli on components of the event-related
potential (ERP) in go/nogo reaction tasks. Biological Psychology, 23, 21-38.
Kopp, B., Rist, F., & Mattler, U. (1996). N200 in the flanker task as a neurobehavioral tool
for investigating executive control. Psychophysiology, 33, 282-294.
Körding, K. P., Tenenbaum, J. B., & Shadmehr, R. (2007). The dynamics of memory as a
consequence of optimal adaptation to a changing body. Nature Neuroscience, 10, 779-786.
Körding, K. P., & Wolpert, D. M. (2006). Bayesian decision theory in sensorimotor
control. Trends in Cognitive Sciences, 10, 319-326.
Kornblum, S., Hasbroucq, T., & Osman, A. (1990). Dimensional overlap: Cognitive basis
for stimulus-response compatibility – A model and taxonomy. Psychological Review, 97,
253-270.
Kosonen, P., & Winne, P. H. (1995). Effects of teaching statistical laws on reasoning about
everyday problems. Journal of Educational Psychology, 87, 33-46.
Krakauer, J. W., Mazzoni, P., Ghazizadeh, A., Ravindran, R., & Shadmehr, R. (2006).
Generalization of motor learning depends on the history of prior action. Public Library of
Science Biology, 4, e316.
Krawczyk, D. C. (2002). Contributions of the prefrontal cortex to the neural basis of
human decision making. Neuroscience and Biobehavioral Reviews, 26, 631-664.
Krugel, L. K., Biele, G., Mohr, P. N. C., Li, S.-C., & Heekeren, H. R. (2009). Genetic
variation in dopaminergic neuromodulation influences the ability to rapidly and flexibly
adapt decisions. PNAS Proceedings of the National Academy of Sciences of the United
States of America, 106, 17951-17956.
Langan-Fox, J., & Shirley, D. A. (2003). The nature and measurement of intuition:
Cognitive and behavioral interests, personality, and experiences. Creativity Research
Journal, 15, 207-222.
Leboeuf, R. A. (2002). Alternating selves and conflicting choices: Identity salience and
preference inconsistency. Dissertation Abstracts International: Section B: The Sciences
and Engineering, 63, 1088.
Lee, D., McGreevy, B. P., & Barraclough, D. J. (2005). Learning and decision making in
monkeys during a rock-paper-scissors game. Cognitive Brain Research, 25, 416-430.
Lehman, D. R., Lempert, R. O., & Nisbett, R. E. (1988). The effects of graduate training
on reasoning: Formal discipline and thinking about everyday-life events. American
Psychologist, 43, 431-442.
Lerner, J. S., & Keltner, D. (2001). Fear, anger, and risk. Journal of Personality and Social
Psychology, 81, 146-159.
Leuthold, H., & Jentzsch, I. (2001). Neural correlates of advance movement preparation: A
dipole source analysis approach. Cognitive Brain Research, 12, 207-224.
Leuthold, H., & Jentzsch, I. (2002). Distinguishing neural sources of movement
preparation and execution: An electrophysiological analysis. Biological Psychology, 60,
173-198.
Leuthold, H., Sommer, W., & Ulrich, R. (1996). Partial advance information and response
preparation: Inferences from the lateralized readiness potential. Journal of Experimental
Psychology: General, 125, 307-323.
Leuthold, H., Sommer, W., & Ulrich, R. (2004). Preparing for action: Inferences from
CNV and LRP. Journal of Psychophysiology, 18, 77-88.
Levin, I. P., Gaeth, G. J., Schreiber, J., & Lauriola, M. (2002). A new look at framing
effects: Distribution of effect sizes, individual differences, and independence of types of
effects. Organizational Behavior and Human Decision Processes, 88, 411-429.
Lewin, K. (1939). Field theory and experiment in social psychology. The American
Journal of Sociology, 44, 868-896.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious
intention to act in relation to onset of cerebral activity (Readiness-Potential): The
unconscious initiation of a freely voluntary act. Brain, 106, 623-642.
Lieberman, M. D. (2000). Intuition: A social cognitive neuroscience approach.
Psychological Bulletin, 126, 109-137.
Lieberman, M. D. (2003). Reflexive and reflective judgment processes: A social cognitive
neuroscience approach. In J. P. Forgas, K. D. Williams & W. von Hippel (Eds.), Social
judgments: Implicit and explicit processes. (pp. 44-67). New York, NY, US: Cambridge
University Press.
Lieberman, M. D. (2007). Social cognitive neuroscience: A review of core processes.
Annual Review of Psychology, 58, 259-289.
Lieberman, M. D., Jarcho, J. M., & Satpute, A. B. (2004). Evidence-based and
intuition-based self-knowledge: An fMRI study. Journal of Personality and Social Psychology, 87,
421-435.
Lindeman, S. T., Van den Brink, W. P., & Hoogstraten, J. (1988). Effect of feedback on
base-rate utilization. Perceptual and Motor Skills, 67, 343-350.
Lindsey, S., Hertwig, R., & Gigerenzer, G. (2003). Communicating statistical DNA
evidence. Jurimetrics, 43, 147-163.
Lipkus, I. M., Samsa, G., & Rimer, B. K. (2001). General performance on a numeracy
scale among highly educated samples. Medical Decision Making, 21, 37-44.
Liu, X., Banich, M. T., Jacobson, B. L., & Tanabe, J. L. (2006). Functional dissociation of
attentional selection within PFC: Response and non-response related aspects of attentional
selection as ascertained by fMRI. Cerebral Cortex, 16, 827-834.
Loewenstein, G., & O'Donoghue, T. (2004). Animal spirits: Affective and deliberative
processes in economic behavior. Working Paper, Carnegie Mellon.
Loewenstein, G., Rick, S., & Cohen, J. D. (2008). Neuroeconomics. Annual Review of
Psychology, 59, 647-672.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude
polarization: The effects of prior theories on subsequently considered evidence. Journal of
Personality and Social Psychology, 37, 2098-2109.
Lovett, M. C., & Schunn, C. D. (1999). Task representations, strategy variability, and
base-rate neglect. Journal of Experimental Psychology: General, 128, 107-130.
Luu, P., Tucker, D. M., Derryberry, D., Reed, M., & Poulsen, C. (2003).
Electrophysiological responses to errors and feedback in the process of action regulation.
Psychological Science, 14, 47-53.
Lyon, D., & Slovic, P. (1976). Dominance of accuracy information and neglect of base
rates in probability estimation. Acta Psychologica, 40, 286-298.
Macar, F., & Vidal, F. (2004). Event-related potentials as indices of time processing: A
review. Journal of Psychophysiology, 18, 89-104.
MacDonald, A. W., Cohen, J. D., Stenger, V. A., & Carter, C. S. (2000). Dissociating the
role of the dorsolateral prefrontal and anterior cingulate cortex in cognitive control.
Science, 288, 1835-1838.
Maner, J. K., Richey, J. A., Cromer, K., Mallott, M., Lejuez, C. W., Joiner, T. E., &
Schmidt, N. B. (2007). Dispositional anxiety and risk-avoidant decision-making.
Personality and Individual Differences, 42, 665-675.
Mann, H. B., & Whitney, D. R. (1947). On a test of whether one of two random variables
is stochastically larger than the other. Annals of Mathematical Statistics, 18, 50-60.
Marco-Pallarés, J., Cucurell, D., Cunillera, T., García, R., Andrés-Pueyo, A., Münte, T. F.,
& Rodríguez-Fornells, A. (2008). Human oscillatory activity associated to reward
processing in a gambling task. Neuropsychologia, 46, 241-248.
Marco-Pallarés, J., Cucurell, D., Cunillera, T., Krämer, U. M., Càmara, E., Nager, W., . . .
Rodríguez-Fornells, A. (2009). Genetic variability in the dopamine system (dopamine
receptor D4, catechol-o-methyltransferase) modulates neurophysiological responses to
gains and losses. Biological Psychiatry, 66, 154-161.
Mars, R. B., Coles, M. G. H., Grol, M. J., Holroyd, C. B., Nieuwenhuis, S., Hulstijn, W., &
Toni, I. (2005). Neural dynamics of error processing in medial frontal cortex. NeuroImage,
28, 1007-1013.
Martin, L. E., & Potts, G. F. (2009). Impulsivity in decision-making: An event-related
potential investigation. Personality and Individual Differences, 46, 303-308.
Masaki, H., Falkenstein, M., Stürmer, B., Pinkpank, T., & Sommer, W. (2007). Does the
error negativity reflect response conflict strength? Evidence from a Simon task.
Psychophysiology, 44, 579-585.
Masaki, H., Takeuchi, S., Gehring, W. J., Takasawa, N., & Yamazaki, K. (2006).
Affective-motivational influences on feedback-related ERPs in a gambling task. Brain
Research, 1105, 110-121.
Masaki, H., Wild-Wall, N., Sangals, J., & Sommer, W. (2004). The functional locus of the
lateralized readiness potential. Psychophysiology, 41, 220-230.
Masicampo, E. J., & Baumeister, R. F. (2008). Toward a physiology of dual-process
reasoning and judgment: Lemonade, willpower, and expensive rule-based analysis.
Psychological Science, 19, 255-260.
Massey, C., & Wu, G. (2005). Detecting regime shifts: The causes of under- and
overreaction. Management Science, 51, 932-947.
Matsumoto, M., Matsumoto, K., Abe, H., & Tanaka, K. (2007). Medial prefrontal cell
activity signaling prediction errors of action values. Nature Neuroscience, 10, 647-656.
McCabe, K., Houser, D., Ryan, L., Smith, V., & Trouard, T. (2001). A functional imaging
study of cooperation in two-person reciprocal exchange. Proceedings of the National
Academy of Sciences, 98, 11832-11835.
McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context
effects in letter perception: I. An account of basic findings. Psychological Review, 88,
375-407.
McClure, S. M., Botvinick, M. M., Yeung, N., Greene, J. D., & Cohen, J. D. (2007).
Conflict monitoring in cognition-emotion competition. In J. J. Gross (Ed.), Handbook of
emotion regulation. (pp. 204-226). New York, NY, US: Guilford Press.
Medin, D. L., & Bazerman, M. H. (1999). Broadening behavioral decision research:
Multiple levels of cognitive processing. Psychonomic Bulletin & Review, 6, 533-546.
Meltzoff, A. N. (2005). Imitation and other minds: The 'like me' hypothesis. In S. Hurley &
N. Chater (Eds.), Perspectives on imitation: From neuroscience to social science, Vol. 2:
Imitation, human development, and culture. (pp. 55-77). Cambridge, MA, US: MIT Press.
Meltzoff, A. N. (2007). The 'like me' framework for recognizing and becoming an
intentional agent. Acta Psychologica, 124, 26-43.
Meltzoff, A. N., & Moore, M. K. (1997). Explaining facial imitation: A theoretical model.
Early Development & Parenting, 6, 179-192.
Mennes, M., Wouters, H., van den Bergh, B., Lagae, L., & Stiers, P. (2008). ERP
correlates of complex human decision making in a gambling paradigm: Detection and
resolution of conflict. Psychophysiology, 45, 714-720.
Meyer-Lindenberg, A., Kohn, P. D., Kolachana, B., Kippenhan, S., McInerney-Leo, A.,
Nussbaum, R., . . . Berman, K. F. (2005). Midbrain dopamine and prefrontal function in
humans: Interaction and modulation by COMT genotype. Nature Neuroscience, 8, 594-596.
Milham, M. P., & Banich, M. T. (2005). Anterior cingulate cortex: An fMRI analysis of
conflict specificity and functional differentiation. Human Brain Mapping, 25, 328-335.
Miller, E. K. (2000). The prefrontal cortex and cognitive control. Nature Reviews
Neuroscience, 1, 59-65.
Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function.
Annual Review of Neuroscience, 24, 167-202.
Miller, J. (1998). Effects of stimulus-response probability on choice reaction time:
Evidence from the lateralized readiness potential. Journal of Experimental Psychology:
Human Perception and Performance, 24, 1521-1534.
Miller, J., & Schröter, H. (2002). Online response preparation in a rapid serial visual search
task. Journal of Experimental Psychology: Human Perception and Performance, 28, 1364-1390.
Miltner, W. H. R., Braun, C. H., & Coles, M. G. H. (1997). Event-related brain potentials
following incorrect feedback in a time-estimation task: Evidence for a 'generic' neural
system for error detection. Journal of Cognitive Neuroscience, 9, 788-798.
Montague, P. R., Hyman, S. E., & Cohen, J. D. (2004). Computational roles for dopamine
in behavioural control. Nature, 431, 760-767.
Mookherjee, D., & Sopher, B. (1994). Learning behavior in an experimental matching
pennies game. Games and Economic Behavior, 7, 62-91.
Morris, S. E., Heerey, E. A., Gold, J. M., & Holroyd, C. B. (2008). Learning-related
changes in brain activity following errors and performance feedback in schizophrenia.
Schizophrenia Research, 99, 274-285.
Moutier, S., & Houdé, O. (2003). Judgement under uncertainty and conjunction fallacy
inhibition training. Thinking & Reasoning, 9, 185-201.
Mueller, S. C., Swainson, R., & Jackson, G. M. (2009). ERP indices of persisting and
current inhibitory control: A study of saccadic task switching. NeuroImage, 45, 191-197.
Müller-Gethmann, H., Rinkenauer, G., Stahl, J., & Ulrich, R. (2000). Preparation of
response force and movement direction: Onset effects on the lateralized readiness
potential. Psychophysiology, 37, 507-514.
Mussweiler, T., & Strack, F. (1999). Comparing is believing: A selective accessibility
model of judgmental anchoring. In W. Stroebe & M. Hewstone (Eds.), European review of
social psychology, Vol. 10. Chichester, England: Wiley.
Myers, I. B., & McCaulley, M. H. (1986). Manual: A guide to the development and use of
the MBTI. Palo Alto, CA, US: Consulting Psychologists Press.
Navon, D. (1978). The importance of being conservative: Some reflections on human
Bayesian behaviour. British Journal of Mathematical and Statistical Psychology, 31, 33-48.
Nelissen, R. M. A., Dijker, A. J. M., & de Vries, N. K. (2007). How to turn a hawk into a
dove and vice versa: Interactions between emotions and goals in a give-some dilemma
game. Journal of Experimental Social Psychology, 43, 280-286.
Newell, B. R., Weston, N. J., & Shanks, D. R. (2003). Empirical tests of a fast-and-frugal
heuristic: Not everyone 'takes-the-best'. Organizational Behavior and Human Decision
Processes, 91, 82-96.
Nickerson, R. S. (1996). Ambiguities and unstated assumptions in probabilistic reasoning.
Psychological Bulletin, 120, 410-433.
Nieuwenhuis, S., Nielen, M. M., Mol, N., Hajcak, G., & Veltman, D. J. (2005).
Performance monitoring in obsessive-compulsive disorder. Psychiatry Research, 134, 111-122.
Nieuwenhuis, S., Ridderinkhof, K. R., Blom, J., Band, G. P. H., & Kok, A. (2001). Error-related brain potentials are differentially related to awareness of response errors: Evidence
from an antisaccade task. Psychophysiology, 38, 752-760.
Nieuwenhuis, S., Ridderinkhof, K. R., Talsma, D., Coles, M. G. H., Holroyd, C. B., Kok,
A., & Van der Molen, M. W. (2002). A computational account of altered error processing
in older age: Dopamine and the error-related negativity. Cognitive, Affective & Behavioral
Neuroscience, 2, 19-36.
Nieuwenhuis, S., Yeung, N., Holroyd, C. B., Schurger, A., & Cohen, J. D. (2004).
Sensitivity of electrophysiological activity from medial frontal cortex to utilitarian and
performance feedback. Cerebral Cortex, 14, 741-747.
Nieuwenhuis, S., Yeung, N., Van Den Wildenberg, W., & Ridderinkhof, K. R. (2003).
Electrophysiological correlates of anterior cingulate function in a go/no-go task: Effects of
response conflict and trial type frequency. Cognitive, Affective & Behavioral
Neuroscience, 3, 17-26.
O'Doherty, J., Dayan, P., Schultz, J., Deichmann, R., Friston, K., & Dolan, R. J. (2004).
Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science, 304,
452-454.
Oliveira, F. T. P., McDonald, J. J., & Goodman, D. (2007). Performance monitoring in the
anterior cingulate is not all error related: Expectancy deviation and the representation of
action-outcome associations. Journal of Cognitive Neuroscience, 19, 1994-2004.
Onoda, K., Abe, S., & Yamaguchi, S. (2010). Feedback-related negativity is correlated
with unplanned impulsivity. NeuroReport, 21, 736-739.
Osman, A., & Moore, C. M. (1993). The locus of dual-task interference: Psychological
refractory effects on movement-related brain potentials. Journal of Experimental
Psychology: Human Perception and Performance, 19, 1292-1312.
Osman, A., Moore, C. M., & Ulrich, R. (1995). Bisecting RT with lateralized readiness
potentials: Precue effects after LRP onset. Acta Psychologica, 90, 111-127.
Ostaszewski, P. (1996). The relation between temperament and rate of temporal
discounting. European Journal of Personality, 10, 161-172.
Ouwersloot, H., Nijkamp, P., & Rietveld, P. (1998). Errors in probability updating
behaviour: Measurement and impact analysis. Journal of Economic Psychology, 19, 535-563.
Pacini, R., & Epstein, S. (1999). The relation of rational and experiential information
processing styles to personality, basic beliefs, and the ratio-bias phenomenon. Journal of
Personality and Social Psychology, 76, 972-987.
Pagnoni, G., Zink, C. F., Montague, P. R., & Berns, G. S. (2002). Activity in human
ventral striatum locked to errors of reward prediction. Nature Neuroscience, 5, 97-98.
Pailing, P. E., & Segalowitz, S. J. (2004). The error-related negativity as a state and trait
measure: Motivation, personality, and ERPs in response to errors. Psychophysiology, 41,
84-95.
Pailing, P. E., Segalowitz, S. J., Dywan, J., & Davies, P. L. (2002). Error negativity and
response control. Psychophysiology, 39, 198-206.
Palmer, S. E., & Schloss, K. B. (2010). An ecological valence theory of human color
preference. PNAS Proceedings of the National Academy of Sciences of the United States of
America, 107, 8877-8882.
Parrott, W. G. (1996). Affect. In A. S. R. Manstead & M. Hewstone (Eds.), Blackwell
encyclopedia of social psychology. (pp. 15-16). Oxford, England: Blackwell Publishers
Ltd.
Paulus, M. P., Rogalsky, C., Simmons, A., Feinstein, J. S., & Stein, M. B. (2003).
Increased activation in the right insula during risk-taking decision making is related to
harm avoidance and neuroticism. NeuroImage, 19, 1439-1448.
Paus, T. (2001). Primate anterior cingulate cortex: Where motor control, drive and
cognition interface. Nature Reviews Neuroscience, 2, 417-424.
Payne, J. W., Bettman, J. R., Coupey, E., & Johnson, E. J. (1992). A constructive process
view of decision making: Multiple strategies in judgment and choice. Acta Psychologica,
80, 107-141.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in
decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition,
14, 534-552.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. New
York, NY, US: Cambridge University Press.
Peterson, C. R., & Beach, L. R. (1967). Man as an intuitive statistician. Psychological
Bulletin, 68, 29-46.
Petty, R. E., & Cacioppo, J. T. (1981). Attitudes and persuasion: Classical and
contemporary approaches. Dubuque, IA, US: Brown.
Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and
peripheral routes to attitude change. New York, NY, US: Springer.
Pfefferbaum, A., Ford, J. M., Weller, B. J., & Kopell, B. S. (1985). ERPs to response
production and inhibition. Electroencephalography & Clinical Neurophysiology, 60, 423-434.
Pham, M. T. (2007). Emotion and rationality: A critical review and interpretation of
empirical evidence. Review of General Psychology, 11, 155-178.
Philiastides, M. G., Biele, G., Vavatzanidis, N., Kazzer, P., & Heekeren, H. R. (2010).
Temporal dynamics of prediction error processing during reward-based decision making.
NeuroImage, 53, 221-232.
Phillips, L. D., & Edwards, W. (1966). Conservatism in a simple probability inference
task. Journal of Experimental Psychology, 72, 346-354.
Pietschmann, M., Simon, K., Endrass, T., & Kathmann, N. (2008). Changes of
performance monitoring with learning in older and younger adults. Psychophysiology, 45,
559-568.
Plessner, H. (2006). Die Klugheit der Intuition und ihre Grenzen. In A. Scherzberg (Ed.),
Kluges Entscheiden. (pp. 109-120). Tübingen, Germany: Mohr Siebeck.
Plessner, H., & Czenna, S. (2008). The benefits of intuition. In H. Plessner, C. Betsch & T.
Betsch (Eds.), Intuition in judgment and decision making. (pp. 251-265). Mahwah, NJ, US:
Lawrence Erlbaum Associates.
Pliszka, S. R., Liotti, M., & Woldorff, M. G. (2000). Inhibitory control in children with
attention-deficit/hyperactivity disorder: Event-related potentials identify the processing
component and timing of an impaired right-frontal response-inhibition mechanism.
Biological Psychiatry, 48, 238-246.
Pocheptsova, A., Amir, O., Dhar, R., & Baumeister, R. F. (2009). Deciding without
resources: Resource depletion and choice in context. Journal of Marketing Research, 46,
344-355.
Polezzi, D., Sartori, G., Rumiati, R., Vidotto, G., & Daum, I. (2010). Brain correlates of
risky decision-making. NeuroImage, 49, 1886-1894.
Polich, J. (2007). Updating P300: An integrative theory of P3a and P3b. Clinical
Neurophysiology, 118, 2128-2148.
Posner, M. I., & Snyder, C. R. R. (1975). Attention and cognitive control. In R. L. Solso
(Ed.), Information processing and cognition: The Loyola symposium. (pp. 550-585).
Hillsdale, NJ, US: Lawrence Erlbaum Associates.
Potts, G. F., George, M. R., Martin, L. E., & Barratt, E. S. (2006). Reduced punishment
sensitivity in neural systems of behavior monitoring in impulsive individuals.
Neuroscience Letters, 397, 130-134.
Pretz, J. E. (2008). Intuition versus analysis: Strategy and experience in complex everyday
problem solving. Memory & Cognition, 36, 554-566.
Preuschoff, K., Bossaerts, P., & Quartz, S. R. (2006). Neural differentiation of expected
reward and risk in human subcortical structures. Neuron, 51, 381.
Rao, R. P. N., Olshausen, B. A., & Lewicki, M. S. (2002). Probabilistic models of the
brain: Perception and neural function. Cambridge, MA, US: MIT Press.
Rapoport, A., Wallsten, T. S., Erev, I., & Cohen, B. L. (1990). Revision of opinion with
verbally and numerically expressed uncertainties. Acta Psychologica, 74, 61-79.
Read, S. J., & Miller, L. C. (1998). On the dynamic construction of meaning: An
interactive activation and competition model of social perception. In S. J. Read & L. C.
Miller (Eds.), Connectionist models of social reasoning and social behavior. (pp. 27-68).
Mahwah, NJ, US: Lawrence Erlbaum Associates.
Read, S. J., Vanman, E. J., & Miller, L. C. (1997). Connectionism, parallel constraint
satisfaction processes, and Gestalt principles: (Re)introducing cognitive dynamics to social
psychology. Personality and Social Psychology Review, 1, 26-53.
Reber, A. S. (1989). Implicit learning and tacit knowledge. Journal of Experimental
Psychology: General, 118, 219-235.
Richetin, J., Perugini, M., Adjali, I., & Hurling, R. (2007). The moderator role of intuitive
versus deliberative decision making for the predictive validity of implicit and explicit
measures. European Journal of Personality, 21, 529-546.
Ridderinkhof, K. R., Ullsperger, M., Crone, E. A., & Nieuwenhuis, S. (2004). The role of
the medial frontal cortex in cognitive control. Science, 306, 443-447.
Ridderinkhof, K. R., van den Wildenberg, W. P. M., Segalowitz, S. J., & Carter, C. S.
(2004). Neurocognitive mechanisms of cognitive control: The role of prefrontal cortex in
action selection, response inhibition, performance monitoring, and reward-based learning.
Brain and Cognition, 56, 129-140.
Rieskamp, J., & Hoffrage, U. (1999). When do people use simple heuristics and how can
we tell? In G. Gigerenzer, P. M. Todd & the ABC Research Group (Eds.), Simple
heuristics that make us smart. (pp. 141-167). New York, NY, US: Oxford University
Press.
Rieskamp, J., & Otto, P. E. (2006). SSL: A theory of how people learn to select strategies.
Journal of Experimental Psychology: General, 135, 207-236.
Rodríguez-Fornells, A., Kurzbuch, A. R., & Münte, T. F. (2002). Time course of error
detection and correction in humans: Neurophysiological evidence. The Journal of
Neuroscience, 22, 9990-9996.
Rolls, E. T., McCabe, C., & Redoute, J. (2008). Expected value, reward outcome, and
temporal difference error representations in a probabilistic decision task. Cerebral Cortex,
18, 652-663.
Rosenberg, M. (1965). Society and the adolescent self-image. Princeton, NJ, US: Princeton
University Press.
Rotge, J. Y., Clair, A. H., Jaafari, N., Hantouche, E. G., Pelissolo, A., Goillandeau, M., . . .
Aouizerate, B. (2008). A challenging task for assessment of checking behaviors in
obsessive-compulsive disorder. Acta Psychiatrica Scandinavica, 117, 465-473.
Roth, A. E., & Erev, I. (1995). Learning in extensive-form games: Experimental data and
simple dynamic models in the intermediate term. Games and Economic Behavior, 8, 164-212.
Ruchsow, M., Grön, G., Reuter, K., Spitzer, M., Hermle, L., & Kiefer, M. (2005). Error-related brain activity in patients with obsessive-compulsive disorder and in healthy
controls. Journal of Psychophysiology, 19, 298-304.
Ruchsow, M., Grothe, J., Spitzer, M., & Kiefer, M. (2002). Human anterior cingulate
cortex is activated by negative feedback: Evidence from event-related potentials in a
guessing task. Neuroscience Letters, 325, 203-206.
Ruder, M., & Bless, H. (2003). Mood and the reliance on the ease of retrieval heuristic.
Journal of Personality and Social Psychology, 85, 20-32.
Rushworth, M. F. S., & Behrens, T. E. J. (2008). Choice, uncertainty and value in
prefrontal and cingulate cortex. Nature Neuroscience, 11, 389-397.
Sailer, U., Fischmeister, F. P. S., & Bauer, H. (2010). Effects of learning on feedback-related brain potentials in a decision-making task. Brain Research, 1342, 85-93.
Sanders, A. F. (1968). Choice among bets and revision of opinion. Acta Psychologica, 28,
76-83.
Sanfey, A. G. (2007). Decision neuroscience: New directions in studies of judgment and
decision making. Current Directions in Psychological Science, 16, 151-155.
Sanfey, A. G., & Chang, L. J. (2008). Multiple systems in decision making. In W. T.
Tucker, S. Ferson, A. M. Finkel & D. Slavin (Eds.), Strategies for risk communication:
Evolution, evidence, experience. (pp. 53-62). Malden, MA, US: Blackwell Publishing.
Sanfey, A. G., & Hastie, R. (2002). Interevent relationships and judgment under
uncertainty: Structure determines strategy. Memory & Cognition, 30, 921-933.
Sanfey, A. G., Loewenstein, G., McClure, S. M., & Cohen, J. D. (2006). Neuroeconomics:
Cross-currents in research on decision-making. Trends in Cognitive Sciences, 10, 108-116.
Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The
neural basis of economic decision-making in the Ultimatum Game. Science, 300, 1755-1758.
Santesso, D. L., Steele, K. T., Bogdan, R., Holmes, A. J., Deveney, C. M., Meites, T. M.,
& Pizzagalli, D. A. (2008). Enhanced negative feedback responses in remitted depression.
NeuroReport, 19, 1045-1048.
Sato, A., & Yasuda, A. (2004). The neural basis of error processing: The error-related
negativity associated with the feedback reflects response selection based on reward-prediction error. Japanese Journal of Physiological Psychology and Psychophysiology, 22,
19-33.
Sato, A., Yasuda, A., Ohira, H., Miyawaki, K., Nishikawa, M., Kumano, H., & Kuboki, T.
(2005). Effects of value and reward magnitude on feedback negativity and P300.
NeuroReport, 16, 407-411.
Satpute, A. B., & Lieberman, M. D. (2006). Integrating automatic and controlled processes
into neurocognitive models of social cognition. Brain Research, 1079, 86-97.
Savage, L. J. (1954). The foundations of statistics. Oxford, England: Wiley.
Scheibe, C., Schubert, R., Sommer, W., & Heekeren, H. R. (2009). Electrophysiological
evidence for the effect of prior probability on response preparation. Psychophysiology, 46,
758-770.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information
processing: I. Detection, search, and attention. Psychological Review, 84, 1-66.
Schönberg, T., Daw, N. D., Joel, D., & O'Doherty, J. (2007). Reinforcement learning
signals in the human striatum distinguish learners from nonlearners during reward-based
decision making. The Journal of Neuroscience, 27, 12860-12867.
Schultz, W. (1998). Predictive reward signal of dopamine neurons. Journal of
Neurophysiology, 80, 1-27.
Schultz, W. (2002). Getting formal with dopamine and reward. Neuron, 36, 241-263.
Schultz, W. (2004). Neural coding of basic reward terms of animal learning theory, game
theory, microeconomics and behavioural ecology. Current Opinion in Neurobiology, 14,
139-147.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and
reward. Science, 275, 1593-1599.
Schunk, D., & Betsch, C. (2006). Explaining heterogeneity in utility functions by
individual differences in decision modes. Journal of Economic Psychology, 27, 386-401.
Schwarz, N. (2000). Emotion, cognition, and decision making. Cognition and Emotion, 14,
433-440.
Schwarz, N., & Clore, G. L. (2003). Mood as information: 20 years later. Psychological
Inquiry, 14, 296-303.
Sedlmeier, P., & Gigerenzer, G. (2001). Teaching Bayesian reasoning in less than two
hours. Journal of Experimental Psychology: General, 130, 380-400.
Sharot, T., Velasquez, C. M., & Dolan, R. J. (2010). Do decisions shape preference?:
Evidence from blind choice. Psychological Science, 21, 1231-1235.
Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information
processing: II. Perceptual learning, automatic attending and a general theory.
Psychological Review, 84, 127-190.
Shiloh, S., Salton, E., & Sharabi, D. (2002). Individual differences in rational and intuitive
thinking styles as predictors of heuristic responses and framing effects. Personality and
Individual Differences, 32, 415-429.
Shiv, B., Loewenstein, G., Bechara, A., Damasio, H., & Damasio, A. R. (2005).
Investment behavior and the negative side of emotion. Psychological Science, 16, 435-439.
Siedler, T., Schupp, J., Spiess, C. K., & Wagner, G. G. (2008). The German Socio-Economic Panel as a reference data set. SOEPpapers 150, DIW Berlin, The German Socio-Economic Panel (SOEP).
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of
Economics, 69, 99-118.
Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. Oxford,
England: Appleton-Century.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological
Bulletin, 119, 3-22.
Sloman, S. A. (2002). Two systems of reasoning. In T. Gilovich, D. Griffin & D.
Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment. (pp. 379-396). New York, NY, US: Cambridge University Press.
Slovic, P., Finucane, M., Peters, E., & MacGregor, D. G. (2002). The affect heuristic. In T.
Gilovich, D. Griffin & D. Kahneman (Eds.), Heuristics and biases: The psychology of
intuitive judgment. (pp. 397-420). New York, NY, US: Cambridge University Press.
Slovic, P., & Lichtenstein, S. (1971). Comparison of Bayesian and regression approaches
to the study of information processing in judgment. Organizational Behavior & Human
Performance, 6, 649-744.
Smulders, F. T. Y., & Miller, J. O. (in press). Lateralized readiness potential. In S. J. Luck
& E. S. Kappenman (Eds.), Oxford handbook of event-related potential components. New
York, NY, US: Oxford University Press.
Soon, C. S., Brass, M., Heinze, H.-J., & Haynes, J.-D. (2008). Unconscious determinants
of free decisions in the human brain. Nature Neuroscience, 11, 543-545.
Spencer, S. J., Steele, C. M., & Quinn, D. M. (1999). Stereotype threat and women's math
performance. Journal of Experimental Social Psychology, 35, 4-28.
Spinella, M., Yang, B., & Lester, D. (2008). Prefrontal cortex dysfunction and attitudes
toward money: A study in neuroeconomics. The Journal of Socio-Economics, 37, 1785-1788.
Stahl, J., & Gibbons, H. (2007). Dynamics of response-conflict monitoring and individual
differences in response control and behavioral control: An electrophysiological
investigation using a stop-signal task. Clinical Neurophysiology, 118, 581-596.
Stanovich, K. E. (1999). Who is rational? Studies of individual differences in reasoning.
Mahwah, NJ, US: Lawrence Erlbaum Associates.
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications
for the rationality debate? Behavioral and Brain Sciences, 23, 645-726.
Starmer, C. (2000). Developments in non-expected utility theory: The hunt for a
descriptive theory of choice under risk. Journal of Economic Literature, 38, 332-382.
Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance
of African Americans. Journal of Personality and Social Psychology, 69, 797-811.
Stemmer, B., Segalowitz, S. J., Witzke, W., & Schönle, P. W. (2004). Error detection in
patients with lesions to the medial prefrontal cortex: An ERP study. Neuropsychologia, 42,
118-130.
Strack, F., & Deutsch, R. (2004). Reflective and impulsive determinants of social behavior.
Personality and Social Psychology Review, 8, 220-247.
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of
Experimental Psychology, 18, 643-662.
Sutton, R. S., & Barto, A. G. (1981). Toward a modern theory of adaptive networks:
Expectation and prediction. Psychological Review, 88, 135-170.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge,
MA, US: MIT Press.
Swick, D., & Turken, A. U. (2002). Dissociation between conflict detection and error
monitoring in the human anterior cingulate cortex. Proceedings of the National Academy
of Sciences, 99, 16354-16359.
Takasawa, N., Takino, R., & Yamazaki, K. (1990). Event-related potentials derived from
feedback tones during motor learning. Japanese Journal of Physiological Psychology and
Psychophysiology, 8, 95-101.
Tang, T. L.-P. (1992). The meaning of money revisited. Journal of Organizational
Behavior, 13, 197-202.
Tang, T. L.-P. (2007). Income and quality of life: Does the love of money make a
difference? Journal of Business Ethics, 72, 375-393.
Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self-control predicts good
adjustment, less pathology, better grades, and interpersonal success. Journal of
Personality, 72, 271-322.
Thomas, R. P., Dougherty, M. R., Sprenger, A. M., & Harbison, J. I. (2008). Diagnostic
hypothesis generation and human judgment. Psychological Review, 115, 155-185.
Thorndike, E. L. (1911). Animal intelligence. Experimental studies. Oxford, England:
Macmillan.
Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss
aversion in decision-making under risk. Science, 315, 515-518.
Tomlinson-Keasey, C. (1994). My dirty little secret: Women as clandestine intellectuals. In
C. E. Franz & A. J. Stewart (Eds.), Women creating lives. (pp. 227-245). Boulder, CO, US:
Westview Press.
Toyomaki, A., & Murohashi, H. (2005). Discrepancy between feedback negativity and
subjective evaluation in gambling. NeuroReport, 16, 1865-1868.
Trepel, C., Fox, C. R., & Poldrack, R. A. (2005). Prospect theory on the brain? Toward a
cognitive neuroscience of decision under risk. Cognitive Brain Research, 23, 34-50.
Trevena, J. A., & Miller, J. (2002). Cortical movement preparation before and after a
conscious decision to move. Consciousness and Cognition: An International Journal, 11,
162-190.
Trevena, J. A., & Miller, J. (2010). Brain preparation before a voluntary action: Evidence
against unconscious movement initiation. Consciousness and Cognition, 19, 447-456.
Tunbridge, E. M., Bannerman, D. M., Sharp, T., & Harrison, P. J. (2004). Catechol-O-methyltransferase inhibition improves set-shifting performance and elevates stimulated
dopamine release in the rat prefrontal cortex. The Journal of Neuroscience, 24, 5331-5335.
Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological
Bulletin, 76, 105-110.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases.
Science, 185, 1124-1131.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of
choice. Science, 211, 453-458.
Tversky, A., & Kahneman, D. (1982). Evidential impact of base rates. In D. Kahneman, P.
Slovic & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. (pp. 153-160). Cambridge, England: Cambridge University Press.
Tversky, A., & Kahneman, D. (1983). Extensional vs. intuitive reasoning: The conjunction
fallacy in probability judgment. Psychological Review, 90, 293-315.
Ullsperger, M., & von Cramon, D. Y. (2006). The role of intact frontostriatal circuits in
error processing. Journal of Cognitive Neuroscience, 18, 651-664.
van 't Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective state and
decision-making in the Ultimatum Game. Experimental Brain Research, 169, 564-568.
van Boxtel, G. J. M., van der Molen, M. W., Jennings, J. R., & Brunia, C. H. M. (2001). A
psychophysiological analysis of inhibitory motor control in the stop-signal paradigm.
Biological Psychology, 58, 229-262.
van der Helden, J., Boksem, M. A. S., & Blom, J. H. G. (2010). The importance of failure:
Feedback-related negativity predicts motor learning efficiency. Cerebral Cortex, 20, 1596-1603.
van Hoesen, G. W., Morecraft, R. J., & Vogt, B. A. (1993). Connections of the monkey
cingulate cortex. In B. A. Vogt & M. Gabriel (Eds.), Neurobiology of cingulate cortex and
limbic thalamus: A comprehensive handbook. (pp. 249-284). Boston, MA, US: Birkhauser.
van Meel, C. S., Oosterlaan, J., Heslenfeld, D. J., & Sergeant, J. A. (2005). Telling good
from bad news: ADHD differentially affects processing of positive and negative feedback
during guessing. Neuropsychologia, 43, 1946-1954.
van Veen, V., & Carter, C. S. (2002). The timing of action monitoring in rostral and caudal
anterior cingulate cortex. Journal of Cognitive Neuroscience, 14, 593-602.
van Veen, V., & Carter, C. S. (2006). Conflict and cognitive control in the brain. Current
Directions in Psychological Science, 15, 237-240.
von Collani, G., & Herzberg, P. Y. (2003). Eine revidierte Fassung der deutschsprachigen
Skala zum Selbstwertgefühl von Rosenberg. Zeitschrift für Differentielle und
Diagnostische Psychologie, 24, 3-7.
von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior.
Princeton, NJ, US: Princeton University Press.
Wallsten, T. S. (1972). Conjoint-measurement framework for the study of probabilistic
information processing. Psychological Review, 79, 245-260.
Weber, E. U., & Johnson, E. J. (2009). Mindful judgment and decision making. Annual
Review of Psychology, 60, 53-85.
Weber, E. U., & Lindemann, P. G. (2008). From intuition to analysis: Making decisions
with your head, your heart, or by the book. In H. Plessner, C. Betsch & T. Betsch (Eds.),
Intuition in judgment and decision making. (pp. 191-207). Mahwah, NJ, US: Lawrence
Erlbaum Associates.
Wegener, D. T., & Petty, R. E. (1997). The flexible correction model: The role of naive
theories of bias in bias correction. In M. P. Zanna (Ed.), Advances in experimental social
psychology, Vol. 29. (pp. 141-208). San Diego, CA, US: Academic Press.
Wegner, D. M. (1994). Ironic processes of mental control. Psychological Review, 101, 34-52.
Wertheimer, M. (1938a). Gestalt theory. In W. D. Ellis (Ed.), A source book of Gestalt
psychology. (pp. 1-11). London, England: Kegan Paul, Trench, Trubner & Company.
Wertheimer, M. (1938b). Laws of organization in perceptual forms. In W. D. Ellis (Ed.), A
source book of Gestalt psychology. (pp. 71-88). London, England: Kegan Paul, Trench,
Trubner & Company.
Westfall, J. E. (2010). Exploring common antecedents of three related decision biases.
(Doctoral dissertation, University of Toledo). Available from ProQuest Dissertations &
Theses database.
Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1,
80-83.
Wilson, T. D. (2002). Strangers to ourselves: Discovering the adaptive unconscious.
Cambridge, MA, US: Belknap Press/Harvard University Press.
Wilson, T. D., & Brekke, N. (1994). Mental contamination and mental correction:
Unwanted influences on judgments and evaluations. Psychological Bulletin, 116, 117-142.
Wilson, T. D., Lindsey, S., & Schooler, T. Y. (2000). A model of dual attitudes.
Psychological Review, 107, 101-126.
Wilson, T. D., & Schooler, J. W. (1991). Thinking too much: Introspection can reduce the
quality of preferences and decisions. Journal of Personality and Social Psychology, 60,
181-192.
Winkler, R. L., & Murphy, A. H. (1973). Experiments in the laboratory and the real world.
Organizational Behavior & Human Performance, 10, 252-270.
Woolhouse, L. S., & Bayne, R. (2000). Personality and the use of intuition: Individual
differences in strategy and performance on an implicit learning task. European Journal of
Personality, 14, 157-169.
Wu, Y., & Zhou, X. (2009). The P300 and reward valence, magnitude, and expectancy in
outcome evaluation. Brain Research, 1286, 114-122.
Yamauchi, K. T., & Templer, D. I. (1982). The development of a money attitude scale.
Journal of Personality Assessment, 46, 522-528.
Yasuda, A., Sato, A., Miyawaki, K., Kumano, H., & Kuboki, T. (2004). Error-related
negativity reflects detection of negative reward prediction error. NeuroReport, 15, 2561-2565.
Yeung, N., Botvinick, M. M., & Cohen, J. D. (2004). The neural basis of error detection:
Conflict monitoring and the error-related negativity. Psychological Review, 111, 931-959.
Yeung, N., & Cohen, J. D. (2006). The impact of cognitive deficits on conflict monitoring:
Predictable dissociations between the error-related negativity and N2. Psychological
Science, 17, 164-171.
Yeung, N., Holroyd, C. B., & Cohen, J. D. (2005). ERP correlates of feedback and reward
processing in the presence and absence of response choice. Cerebral Cortex, 15, 535-544.
Yeung, N., & Sanfey, A. G. (2004). Independent coding of reward magnitude and valence
in the human brain. The Journal of Neuroscience, 24, 6258-6264.
Yu, R., & Zhou, X. (2009). To bet or not to bet? The error negativity or error-related
negativity associated with risk-taking choices. Journal of Cognitive Neuroscience, 21, 684-696.
Zajonc, R. B. (1998). Emotions. In D. T. Gilbert, S. T. Fiske & G. Lindzey (Eds.), The
handbook of social psychology (4th ed., pp. 591-632). New York, NY, US: McGraw-Hill.
Zizzo, D. J., Stolarz-Fantino, S., Wen, J., & Fantino, E. (2000). A violation of the
monotonicity axiom: Experimental evidence on the conjunction fallacy. Journal of
Economic Behavior & Organization, 41, 263-276.
Zuckerman, M., & Lubin, B. (1965). Multiple Affect Adjective Checklist: Today form. San
Diego, CA, US: Educational and Industrial Testing Service.
Appendix
Appendix A: List of Tables
Table 1.
Amount won and duration of the computer task depending on magnitude
of incentives (Study 1a)
52
Table 2.
Evaluation of stimuli depending on counterbalance condition (Study 1a)
53
Table 3.
Error frequencies depending on magnitude of incentives (Study 1a)
54
Table 4.
Error frequencies depending on trial number (Study 1a)
55
Table 5.
Mean peak amplitude (grand-average) of the FRN as a function of
magnitude of incentives, rate of reinforcement mistakes, and valence of
first-draw feedback (Study 1a)
63
Table 6.
Random-effects probit regressions on second-draw errors (0=correct
choice, 1=error) (Study 1a)
65
Table 7.
Amount won and duration of the computer task depending on magnitude
of incentives (Study 1b)
84
Table 8.
Evaluation of stimuli (Study 1b)
85
Table 9.
Error frequencies depending on magnitude of incentives (Study 1b)
85
Table 10. Error frequencies depending on trial number (Study 1b)
86
Table 11. Mean peak amplitude (grand-average) of the FRN as a function of
magnitude of incentives, rate of reinforcement mistakes, and valence of
second-draw feedback (Study 1b)
94
Table 12. Random-effects probit regressions on second-draw errors (0=correct
choice, 1=error) (Study 1b)
95
Table 13. Posterior odds for the left urn depending on prior probability and number
of 1-balls (Study 2)
118
Table 14. Posterior probabilities of the left urn depending on prior probability and
number of 1-balls (Study 2)
118
Table 15. Corrected odds as an inverse measure of difficulty depending on prior
probability and number of 1-balls (Study 2)
119
Table 16. Classification of decision situations depending on prior probability and
number of 1-balls (Study 2)
120
Table 17. Alternative classification of decision situations depending on prior
probability and number of 1-balls (Study 2)
120
Table 18. Amount won and duration of the computer task (Study 2)
124
Table 19. Error frequencies (Study 2)
124
Table 20. Error frequencies (Study 2)
125
Table 21. Median response times (Study 2)
126
Table 22. Median response times (Study 2)
126
Table 23. Error frequencies after fast and slow decisions (Study 2)
127
Table 24. Error frequencies after fast and slow decisions (Study 2)
127
Table 25. Random-effects probit regressions on decision errors (0=correct choice,
1=error) (Study 2)
134
Table 26. Random-effects linear regressions on decision response times
(logarithmed) (Study 2)
136
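Tables 13 and 14 list posterior odds and posterior probabilities for the left urn in Study 2, which follow from Bayesian updating on the observed sample. As a hedged illustration only (the urn compositions and example numbers below are taken from the Figure 25 caption, and the function name is illustrative, not the analysis code of the dissertation), such values can be computed as:

```python
def posterior_odds_left(prior_left, p_left, p_right, sample):
    """Posterior odds in favor of the left urn after observing a sample of
    ball colors, assuming independent draws with replacement."""
    odds = prior_left / (1.0 - prior_left)            # prior odds
    for color, n in sample.items():                   # multiply likelihood ratios
        odds *= (p_left[color] / p_right[color]) ** n
    return odds

# Example (numbers assumed from the Figure 25 caption): left urn 3 blue / 1 green
# with prior 25 %, right urn 2 blue / 2 green with prior 75 %;
# observed sample: 2 blue balls and 1 green ball.
odds = posterior_odds_left(0.25,
                           {"blue": 3/4, "green": 1/4},
                           {"blue": 2/4, "green": 2/4},
                           {"blue": 2, "green": 1})
prob = odds / (1.0 + odds)  # posterior probability of the left urn
```

With these numbers the prior odds of 1/3 are multiplied by the likelihood ratios (3/4 ÷ 2/4)² for the two blue balls and (1/4 ÷ 2/4) for the green ball, giving posterior odds of 0.375.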
Appendix B: List of Figures
Figure 1.
Distribution of balls in the urns, by treatment condition and state, in
the study of Charness and Levin (2005)
22
Figure 2.
Stimulus pictures on the computer screen: (A) Two urns with
concealed balls; in-between the urns fixation cross; below the urns
feedback on the first and second draw. In this example, a blue ball was
drawn from the left urn and a green ball was drawn from the right urn
(in this order or in reversed order). (B) In case of forced first draws
(see below), the urn which could not be chosen was greyed out (Study
1a)
42
Figure 3.
Distribution of balls in the two urns if one is rewarded for drawing
blue balls (Study 1a)
44
Figure 4.
Stimulus picture on the computer screen if one is rewarded for drawing
blue balls: Overview of the distribution of balls in the two urns (Study 1a)
44
Figure 5.
Schematic representation of the temporal sequence of events during
one trial of the computer task (Study 1a)
45
Figure 6.
Overview of types of errors (Study 1a)
49
Figure 7.
Error frequencies depending on trial number (Study 1a)
56
Figure 8.
Individual heterogeneity regarding error rates (Study 1a)
56
Figure 9.
(A) Grand-average FRN as a function of valence of first-draw
feedback. (B) Topographical mapping of the FRN as a function of
valence of first-draw feedback (Study 1a)
57
Figure 10.
(A) Grand-average FRN as a function of magnitude of incentives and
valence of first-draw feedback. (B) Topographical mapping of the
FRN as a function of magnitude of incentives and valence of first-draw
feedback (Study 1a)
59
Figure 11.
(A) Grand-average FRN as a function of rate of reinforcement
mistakes and valence of first-draw feedback. (B) Topographical
mapping of the FRN as a function of rate of reinforcement mistakes
and valence of first-draw feedback (Study 1a)
60
Figure 12.
(A) Grand-average FRN in the low-incentive condition as a function of
rate of reinforcement mistakes and valence of first-draw feedback. (B)
Topographical mapping of the FRN in the low-incentive condition as a
function of rate of reinforcement mistakes and valence of first-draw
feedback (Study 1a)
61
Figure 13.
(A) Grand-average FRN in the high-incentive condition as a function
of rate of reinforcement mistakes and valence of first-draw feedback.
(B) Topographical mapping of the FRN in the high-incentive condition
as a function of rate of reinforcement mistakes and valence of first-draw feedback (Study 1a)
62
Figure 14.
Distribution of balls in the two urns (Study 1b)
82
Figure 15.
Error frequencies depending on trial number (Study 1b)
87
Figure 16.
Individual heterogeneity regarding error rates (Study 1b)
87
Figure 17.
Grand-average ERP in response to first-draw feedback (Study 1b)
88
Figure 18.
(A) Grand-average FRN as a function of valence of second-draw
feedback. (B) Topographical mapping of the FRN as a function of
valence of second-draw feedback (Study 1b)
89
Figure 19.
(A) Grand-average FRN as a function of magnitude of incentives and
valence of second-draw feedback. (B) Topographical mapping of the
FRN as a function of magnitude of incentives and valence of second-draw feedback (Study 1b)
90
Figure 20.
(A) Grand-average FRN as a function of rate of reinforcement
mistakes and valence of second-draw feedback. (B) Topographical
mapping of the FRN as a function of rate of reinforcement mistakes
and valence of second-draw feedback (Study 1b)
91
Figure 21.
(A) Grand-average FRN in the low-incentive condition as a function of
rate of reinforcement mistakes and valence of second-draw feedback.
(B) Topographical mapping of the FRN in the low-incentive condition
as a function of rate of reinforcement mistakes and valence of second-draw feedback (Study 1b)
92
Figure 22.
(A) Grand-average FRN in the high-incentive condition as a function
of rate of reinforcement mistakes and valence of second-draw
feedback. (B) Topographical mapping of the FRN in the high-incentive
condition as a function of rate of reinforcement mistakes and valence
of second-draw feedback (Study 1b)
93
Figure 23.
Stimulus picture on the computer screen if the majority of balls is blue:
Two urns with blue and green balls; beside the urns their assigned
number(s); in-between the urns fixation square (Study 2)
112
Figure 24.
Distribution of balls in the two urns depending on counterbalance
condition (Study 2)
114
Figure 25.
Stimulus picture on the computer screen if the majority of balls is blue:
Here, the left urn is filled with three blue balls and one green ball and
is selected if the computer randomly draws the number 1 (prior
probability of 25 %). The right urn is filled with two blue and two
green balls and is selected if the computer randomly draws the number
2, 3 or 4 (prior probability of 75 %) (Study 2)
115
Figure 26.
Stimulus picture on the computer screen: Exemplary sample of drawn
balls (Study 2)
115
Figure 27.
Schematic representation of the temporal sequence of events during
one trial of the computer task (Study 2)
116
Figure 28.
(A) Grand-average stimulus-locked LRP as a function of prior
probability. Time zero corresponds to the presentation of sample
evidence. Time -3000 corresponds to the presentation of prior
probabilities. Positive and negative LRP values indicate hand-specific
activation of the left and right response, respectively. (B) Grand-average stimulus-locked LRP difference waveforms as a function of
subtraction method. Time zero corresponds to the presentation of
sample evidence. Time -3000 corresponds to the presentation of prior
probabilities (Study 2)
128
Figure 29.
Grand-average stimulus-locked LRP difference waveforms (prior 3/4
minus prior 1/4) as a function of rate of conservative errors. Time zero
corresponds to the presentation of sample evidence. Time -3000
corresponds to the presentation of prior probabilities. The more positive
the LRP difference value, the stronger is the hand-specific activation of
the response with the higher prior probability (Study 2)
129
Figure 30.
(A) Grand-average N2 as a function of decision situation. Time zero
corresponds to the presentation of sample evidence. (B) Grand-average
N2 difference waveform (neutral minus conflict). Time zero
corresponds to the presentation of sample evidence. (C) Topographical
mapping of N2 difference waveform (200-320 ms poststimulus). (D)
Spatial distribution of the difference between conflict and neutral
situations as a function of time window (Study 2)
130
Figure 31.
(A) Grand-average N2 as a function of rate of errors in conflict
situations and decision situation. Time zero corresponds to the
presentation of sample evidence. (B) Topographical mapping of N2
difference waveform (neutral minus conflict) (200-320 ms
poststimulus) as a function of rate of errors in conflict situations (Study 2)
132
Appendix C: Material of Study 1a (in German)
Instructions for the low-incentive group in counterbalance condition 1
(note that instructions for the high-incentive group were the same with the exception that
participants received 18 cents per successful draw)
Liebe Versuchsteilnehmerin, lieber Versuchsteilnehmer,
vielen Dank für Ihre Teilnahme an diesem entscheidungstheoretischen Experiment. Hierbei werden
Ihnen auf dem Bildschirm zwei Behälter („Urnen“) präsentiert, in welchen sich blaue und grüne
Kugeln befinden. Ziel des Spiels ist es, möglichst viele blaue Kugeln zu ziehen. Dazu muss nach
bestimmten Regeln möglichst geschickt aus beiden Urnen gezogen werden. Im Folgenden werden
zunächst die Elemente auf dem Bildschirm und die Bedienung über Tastatur erklärt. Anschließend
folgen die genauen Spielregeln.
Bildschirm
Das Spiel besteht aus mehreren Durchgängen, bei denen Kugeln aus Urnen gezogen werden.
Bei jedem Durchgang werden Ihnen zwei Urnen angezeigt, die jeweils sechs Kugeln enthalten.
Es gibt blaue und grüne Kugeln. Am Bildschirm werden sie schwarz angezeigt, solange sie
„verdeckt“ in der Urne sind. Unter den Urnen wird angezeigt, welche Kugeln Sie in diesem
Durchgang bereits gezogen haben. Im obigen Beispiel also eine blaue und eine grüne.
Beachten Sie: Wenn eine Urne nicht schwarz, sondern grau angezeigt wird, können Sie gerade
nicht aus ihr ziehen und müssen deshalb die andere wählen.
Vor jedem Durchgang erscheint eine Tabelle auf dem Bildschirm, die zentrale Informationen
aus den Spielregeln zusammenfasst und Sie informiert, wie viele Durchgänge bereits absolviert
wurden.
Bedienung
Die Bedienung erfolgt ausschließlich über drei Tasten. Zwei Tasten sind auf der Tastatur grün
markiert. Mit der linken („F“) ziehen Sie eine Kugel aus der linken Urne und entsprechend mit der
rechten („J“) eine aus der rechten Urne. Mit der Leertaste bewegen Sie sich durch einen Durchlauf
bzw. starten einen neuen Durchlauf, wenn Sie auf dem Bildschirm dazu aufgefordert werden.
Beachten Sie hierbei, dass es nach dem Ziehen einer Kugel und nach dem Erscheinen der
gezogenen Kugel jeweils eine kurze Pause gibt, während der der Computer nicht auf die Tasten
reagiert. Drücken Sie bitte erst wieder eine der Tasten, wenn Sie auf dem Bildschirm dazu
aufgefordert werden, und halten Sie währenddessen den Blick auf das Fixationskreuz in der Mitte
des Bildschirms gerichtet. Wichtig für die EEG-Messung ist zudem, dass Sie sich während des
gesamten Ablaufs möglichst wenig bewegen. Lassen Sie bitte die beiden Zeigefinger über der „F“- bzw. „J“-Taste sowie die Daumen über der Leertaste.
Spielregeln
Ablauf
Insgesamt werden 60 Durchläufe absolviert, von denen jeder aus zwei Zügen besteht. Zu Beginn
jedes Durchlaufs werden die Gegebenheiten der beiden Umweltbedingungen kurz
zusammengefasst. Danach ziehen Sie per Tastendruck eine Kugel aus einer Urne. Während der
ersten 30 Durchläufe ist die Urne, aus welcher der erste Zug erfolgt, festgelegt. Beim zweiten Zug
dürfen Sie zwischen beiden Urnen frei wählen. In den folgenden 30 Durchgängen kann auch beim
ersten Zug frei gewählt werden. Der Zufall bestimmt, welche der Kugeln aus der Urne gezogen wird.
Das Ergebnis (blau oder grün) wird unter der Urne angezeigt und Sie dürfen erneut ziehen. Auch
das Ergebnis des zweiten Zugs wird unter der entsprechenden Urne angezeigt. Anschließend
beginnen Sie einen neuen Durchlauf.
Beachten Sie: Eine gezogene Kugel wird sofort wieder in die Urne zurückgelegt! Durch das Ziehen
einer Kugel verändert sich also zwischen erstem und zweitem Versuch nicht die Anzahl der blauen
und grünen Kugeln in den Urnen. Sie ziehen beide Male aus den absolut selben Urnen.
Gewinnmöglichkeit
Für jede blaue Kugel, die Sie ziehen, bekommen Sie 9 Cent gutgeschrieben; für das Ziehen einer
grünen Kugel bekommen Sie nichts. Durch geschicktes Ziehen und mit ein bisschen Glück können
Sie in diesem Experiment also eine beträchtliche Summe verdienen!
Umweltbedingungen
Das Wichtigste an diesem Spiel ist daher, dass Sie verstehen, wie viele blaue und grüne Kugeln
sich in den Urnen befinden. Lesen Sie die folgenden Abschnitte also besonders aufmerksam durch!
Im Experiment werden zwei Umweltbedingungen unterschieden:
In der ersten Umweltbedingung
befinden sich in der linken Urne 4 blaue
und 2 grüne Kugeln, in der rechten Urne
befinden sich 6 blaue Kugeln.
linke Urne
rechte Urne
In der zweiten Umweltbedingung
befinden sich in der linken Urne 2 blaue
und 4 grüne Kugeln, in der rechten Urne
befinden sich 6 grüne Kugeln.
linke Urne
rechte Urne
Wie bereits erwähnt, müssen Sie in jedem Durchlauf zuerst eine Kugel aus einer der beiden Urnen
ziehen, die dann wieder zurückgelegt wird, und Sie ziehen danach die nächste Kugel. Allerdings
wissen Sie dabei nicht, ob für den jeweiligen Durchlauf die erste oder die zweite Umweltbedingung
herrscht. Dies wird für jeden einzelnen Durchlauf per Zufall durch den Zufallsgenerator des
Computers festgelegt. Die Chance dafür, in einem Durchlauf die erste oder die zweite
Umweltbedingung vorzufinden, liegt jedes Mal bei 50 %. Während eines Durchlaufs verändert sich
die Umweltbedingung nicht!
In der folgenden Tabelle ist die Verteilung der blauen und grünen Kugeln unter den verschiedenen
Umweltbedingungen nochmals zusammenfassend dargestellt. Diese Tabelle wird Ihnen bei jedem
neuen Durchgang angezeigt.
Bedingung
Chance je
Durchlauf
Linke Urne
Rechte Urne
eins
50 %
4 blaue,
2 grüne
6 blaue
zwei
50 %
2 blaue,
4 grüne
6 grüne
Um möglichst oft eine blaue Kugel zu ziehen und damit möglichst viel Geld zu verdienen, ist es
wichtig, dass Sie die obige Tabelle wirklich verstanden haben. Bei Unklarheiten dazu oder zum
Versuch generell fragen Sie bitte bei der Versuchsleiterin nach.
Zur Erinnerung: Für jede gezogene blaue Kugel werden Ihnen 9 Cent gutgeschrieben.
Im Folgenden werden Sie zunächst 3 Übungsdurchgänge absolvieren, um sich mit dem Ablauf des
Experiments vertraut zu machen.
Viel Spaß und viel Erfolg!
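The instructions above fully specify the two equally likely states of the world and the payoff rule of the Study 1a urn task. As a hedged sketch of the normative benchmark (not the analysis code used in the dissertation; the function name is illustrative), a Bayesian decision maker would update state beliefs after the first draw and then choose the urn with the higher expected payoff:

```python
# Urn compositions from the instructions (counterbalance condition 1):
# state 1 (p = .5): left urn 4 blue / 2 green, right urn 6 blue
# state 2 (p = .5): left urn 2 blue / 4 green, right urn 6 green
P_BLUE = {1: {"left": 4/6, "right": 1.0},
          2: {"left": 2/6, "right": 0.0}}

def optimal_second_draw(first_urn, first_color):
    """Urn with the highest expected payoff (blue = rewarded) for the
    second draw, after Bayesian updating on the first draw's outcome."""
    # likelihood of the observed first draw under each state
    like = {s: P_BLUE[s][first_urn] if first_color == "blue"
               else 1.0 - P_BLUE[s][first_urn] for s in (1, 2)}
    norm = 0.5 * like[1] + 0.5 * like[2]
    post = {s: 0.5 * like[s] / norm for s in (1, 2)}  # posterior state beliefs
    # expected probability of drawing a rewarded ball from each urn
    ev = {u: post[1] * P_BLUE[1][u] + post[2] * P_BLUE[2][u]
          for u in ("left", "right")}
    return max(ev, key=ev.get)
```

After a rewarded (blue) first draw from the left urn this benchmark prescribes switching to the right urn, while after an unrewarded (green) draw it prescribes staying with the left urn; simple reinforcement would suggest the opposite pattern, which is the tension the task is built around.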
Instructions for the low-incentive group in counterbalance condition 2
(note that instructions for the high-incentive group were the same with the exception that
participants received 18 cents per successful draw)
Liebe Versuchsteilnehmerin, lieber Versuchsteilnehmer,
vielen Dank für Ihre Teilnahme an diesem entscheidungstheoretischen Experiment. Hierbei werden
Ihnen auf dem Bildschirm zwei Behälter („Urnen“) präsentiert, in welchen sich grüne und blaue
Kugeln befinden. Ziel des Spiels ist es, möglichst viele grüne Kugeln zu ziehen. Dazu muss nach
bestimmten Regeln möglichst geschickt aus beiden Urnen gezogen werden. Im Folgenden werden
zunächst die Elemente auf dem Bildschirm und die Bedienung über Tastatur erklärt. Anschließend
folgen die genauen Spielregeln.
Bildschirm
Das Spiel besteht aus mehreren Durchgängen, bei denen Kugeln aus Urnen gezogen werden.
Bei jedem Durchgang werden Ihnen zwei Urnen angezeigt, die jeweils sechs Kugeln enthalten.
Es gibt grüne und blaue Kugeln. Am Bildschirm werden sie schwarz angezeigt, solange sie
„verdeckt“ in der Urne sind. Unter den Urnen wird angezeigt, welche Kugeln Sie in diesem
Durchgang bereits gezogen haben. Im obigen Beispiel also eine blaue und eine grüne.
Beachten Sie: Wenn eine Urne nicht schwarz, sondern grau angezeigt wird, können Sie gerade
nicht aus ihr ziehen und müssen deshalb die andere wählen.
Vor jedem Durchgang erscheint eine Tabelle auf dem Bildschirm, die zentrale Informationen
aus den Spielregeln zusammenfasst und Sie informiert, wie viele Durchgänge bereits absolviert
wurden.
Bedienung
Die Bedienung erfolgt ausschließlich über drei Tasten. Zwei Tasten sind auf der Tastatur grün
markiert. Mit der linken („F“) ziehen Sie eine Kugel aus der linken Urne und entsprechend mit der
rechten („J“) eine aus der rechten Urne. Mit der Leertaste bewegen Sie sich durch einen Durchlauf
bzw. starten einen neuen Durchlauf, wenn Sie auf dem Bildschirm dazu aufgefordert werden.
Beachten Sie hierbei, dass es nach dem Ziehen einer Kugel und nach dem Erscheinen der
gezogenen Kugel jeweils eine kurze Pause gibt, während der der Computer nicht auf die Tasten
reagiert. Drücken Sie bitte erst wieder eine der Tasten, wenn Sie auf dem Bildschirm dazu
aufgefordert werden, und halten Sie währenddessen den Blick auf das Fixationskreuz in der Mitte
des Bildschirms gerichtet. Wichtig für die EEG-Messung ist zudem, dass Sie sich während des
gesamten Ablaufs möglichst wenig bewegen. Lassen Sie bitte die beiden Zeigefinger über der „F“- bzw. „J“-Taste sowie die Daumen über der Leertaste.
Spielregeln
Ablauf
Insgesamt werden 60 Durchläufe absolviert, von denen jeder aus zwei Zügen besteht. Zu Beginn
jedes Durchlaufs werden die Gegebenheiten der beiden Umweltbedingungen kurz
zusammengefasst. Danach ziehen Sie per Tastendruck eine Kugel aus einer Urne. Während der
ersten 30 Durchläufe ist die Urne, aus welcher der erste Zug erfolgt, festgelegt. Beim zweiten Zug
dürfen Sie zwischen beiden Urnen frei wählen. In den folgenden 30 Durchgängen kann auch beim
ersten Zug frei gewählt werden. Der Zufall bestimmt, welche der Kugeln aus der Urne gezogen wird.
Das Ergebnis (grün oder blau) wird unter der Urne angezeigt und Sie dürfen erneut ziehen. Auch
das Ergebnis des zweiten Zugs wird unter der entsprechenden Urne angezeigt. Anschließend
beginnen Sie einen neuen Durchlauf.
Beachten Sie: Eine gezogene Kugel wird sofort wieder in die Urne zurückgelegt! Durch das Ziehen
einer Kugel verändert sich also zwischen erstem und zweitem Versuch nicht die Anzahl der grünen
und blauen Kugeln in den Urnen. Sie ziehen beide Male aus den absolut selben Urnen.
Gewinnmöglichkeit
Für jede grüne Kugel, die Sie ziehen, bekommen Sie 9 Cent gutgeschrieben; für das Ziehen einer
blauen Kugel bekommen Sie nichts. Durch geschicktes Ziehen und mit ein bisschen Glück können
Sie in diesem Experiment also eine beträchtliche Summe verdienen!
Umweltbedingungen
Das Wichtigste an diesem Spiel ist daher, dass Sie verstehen, wie viele grüne und blaue Kugeln
sich in den Urnen befinden. Lesen Sie die folgenden Abschnitte also besonders aufmerksam durch!
Im Experiment werden zwei Umweltbedingungen unterschieden:
In der ersten Umweltbedingung
befinden sich in der linken Urne 4 grüne
und 2 blaue Kugeln, in der rechten Urne
befinden sich 6 grüne Kugeln.
linke Urne
rechte Urne
In der zweiten Umweltbedingung
befinden sich in der linken Urne 2 grüne
und 4 blaue Kugeln, in der rechten Urne
befinden sich 6 blaue Kugeln.
linke Urne
rechte Urne
Wie bereits erwähnt, müssen Sie in jedem Durchlauf zuerst eine Kugel aus einer der beiden Urnen
ziehen, die dann wieder zurückgelegt wird, und Sie ziehen danach die nächste Kugel. Allerdings
wissen Sie dabei nicht, ob für den jeweiligen Durchlauf die erste oder die zweite Umweltbedingung
herrscht. Dies wird für jeden einzelnen Durchlauf per Zufall durch den Zufallsgenerator des
Computers festgelegt. Die Chance dafür, in einem Durchlauf die erste oder die zweite
Umweltbedingung vorzufinden, liegt jedes Mal bei 50 %. Während eines Durchlaufs verändert sich
die Umweltbedingung nicht!
In der folgenden Tabelle ist die Verteilung der grünen und blauen Kugeln unter den verschiedenen
Umweltbedingungen nochmals zusammenfassend dargestellt. Diese Tabelle wird Ihnen bei jedem
neuen Durchgang angezeigt.
Bedingung
Chance je
Durchlauf
Linke Urne
Rechte Urne
eins
50 %
4 grüne,
2 blaue
6 grüne
zwei
50 %
2 grüne,
4 blaue
6 blaue
Um möglichst oft eine grüne Kugel zu ziehen und damit möglichst viel Geld zu verdienen, ist es
wichtig, dass Sie die obige Tabelle wirklich verstanden haben. Bei Unklarheiten dazu oder zum
Versuch generell fragen Sie bitte bei der Versuchsleiterin nach.
Zur Erinnerung: Für jede gezogene grüne Kugel werden Ihnen 9 Cent gutgeschrieben.
Im Folgenden werden Sie zunächst 3 Übungsdurchgänge absolvieren, um sich mit dem Ablauf des
Experiments vertraut zu machen.
Viel Spaß und viel Erfolg!
Experimental protocol for both incentive groups and both counterbalance conditions
VPNR: _____   Cond.: _____   CB: _____   VL: _____

VP erklärt Aufgabe. Klar? (kaum klar / klar nach Erkl. / völlig klar):
Ziehen mit Zurücklegen/Unabhängigkeit der Züge
Bedeutung der Umweltbedingungen und wann sie sich ändern
Beide Urnen ab Trial 30, vorher nur links
Bezahlungsmodus (welche Farbe, Centbetrag)

Ungefährer erwarteter Gewinn: _____________
Bemerkung:
Problematisch:
Final questionnaire for both incentive groups in counterbalance condition 1
(note that questions for counterbalance condition 2 were the same with the exception that
participants were asked about the payment for a green ball at the end of the questionnaire)
Bitte geben Sie an, wie Sie jetzt gerade in diesem Moment die blaue bzw. die grüne Kugel
empfinden. Empfindungen können danach unterschieden werden, wie sehr sie zum einen
positiv bzw. negativ sind und zum anderen aufgeregt oder eher entspannt empfunden werden.
Kreuzen Sie bitte in jeder Zeile das Männchen an, das am besten Ihren Zustand wiedergibt,
wenn Sie an die blaue bzw. grüne Kugel denken.
Blaue Kugel
positiv
negativ
aufgeregt
entspannt
Grüne Kugel
positiv
negativ
aufgeregt
entspannt
Abschlussfragebogen
Bitte beantworten Sie die folgenden Fragen zu Ihrer Person spontan, ohne lange nachzudenken!
IHRE DATEN WERDEN ABSOLUT ANONYM BEHANDELT!
Falls Sie Fragen haben, wenden Sie sich bitte an die Versuchsleiterin!
1. Manchmal glaube ich, dass ich zu überhaupt nichts gut bin.
stimme ich zu
stimme ich nicht zu
2. Ich bin mir oft nichts wert.
stimme ich zu
stimme ich nicht zu
3. Wenn ich mich mit gleichaltrigen Personen vergleiche, schneide ich eigentlich ganz gut ab.
stimme ich zu
stimme ich nicht zu
4. Ich finde mich ganz in Ordnung.
stimme ich zu
stimme ich nicht zu
5. Ich kann mich manchmal selbst nicht leiden.
stimme ich zu
stimme ich nicht zu
6. Eigentlich bin ich mit mir ganz zufrieden.
stimme ich zu
stimme ich nicht zu
7. Manchmal weiß ich nicht, wofür ich eigentlich lebe.
stimme ich zu
stimme ich nicht zu
8. Ich wollte, ich könnte mehr Achtung vor mir haben.
stimme ich zu
stimme ich nicht zu
9. Manchmal fühle ich mich zu nichts nütze.
stimme ich zu
stimme ich nicht zu
10. Ich bin zufrieden mit mir.
stimme ich zu
stimme ich nicht zu
11. Wie glücklich fühlen Sie sich jetzt im Moment?
gar nicht
glücklich
sehr
glücklich
12. Wie euphorisch fühlen Sie sich jetzt im Moment?
gar nicht
euphorisch
sehr
euphorisch
13. Wie zufrieden fühlen Sie sich jetzt im Moment?
gar nicht
zufrieden
sehr
zufrieden
14. Wie niedergeschlagen fühlen Sie sich jetzt im Moment?
sehr niedergeschlagen
nicht niedergeschlagen
15. Wie aufgeregt fühlen Sie sich jetzt im Moment?
gar nicht
aufgeregt
sehr
aufgeregt
16. Wie traurig fühlen Sie sich jetzt im Moment?
gar nicht
traurig
sehr traurig
17. Wie einsam fühlen Sie sich jetzt im Moment?
gar nicht
einsam
sehr
einsam
18. Wie besorgt fühlen Sie sich jetzt im Moment?
gar nicht
besorgt
sehr
besorgt
19. Ich kann mir über andere sehr schnell einen Eindruck bilden.
vollkommen
unzutreffend
vollkommen
zutreffend
20. Ich akzeptiere die Dinge meist lieber so wie sie sind, anstatt sie zu hinterfragen.
vollkommen
unzutreffend
vollkommen
zutreffend
21. Ich finde es nicht sonderlich aufregend, neue Denkweisen zu lernen.
vollkommen
unzutreffend
vollkommen
zutreffend
22. Abstrakt zu denken reizt mich nicht.
vollkommen
unzutreffend
vollkommen
zutreffend
23. Ich denke, dass ich den Charakter einer Person sehr gut nach ihrer äußeren Erscheinung beurteilen kann.
vollkommen
unzutreffend
vollkommen
zutreffend
24. Mein erster Eindruck von anderen ist fast immer zutreffend.
vollkommen
unzutreffend
vollkommen
zutreffend
25. Das Denken in neuen und unbekannten Situationen fällt mir schwer.
vollkommen
unzutreffend
vollkommen
zutreffend
26. Wenn die Frage ist, ob ich anderen vertrauen soll, entscheide ich normalerweise aus dem Bauch heraus.
vollkommen
unzutreffend
vollkommen
zutreffend
27. Der erste Einfall ist oft der beste.
vollkommen
unzutreffend
vollkommen
zutreffend
28. Die Vorstellung, mich auf mein Denkvermögen zu verlassen, um es zu etwas zu bringen, spricht mich
nicht an.
vollkommen
unzutreffend
vollkommen
zutreffend
29. Bei den meisten Entscheidungen ist es sinnvoll, sich auf sein Gefühl zu verlassen.
vollkommen
unzutreffend
vollkommen
zutreffend
30. Ich würde lieber etwas tun, das wenig Denken erfordert, als etwas, das mit Sicherheit meine
Denkfähigkeit herausfordert.
vollkommen
unzutreffend
vollkommen
zutreffend
31. Ich bin ein sehr intuitiver Mensch.
vollkommen
unzutreffend
vollkommen
zutreffend
32. Ich vertraue meinen unmittelbaren Reaktionen auf andere.
vollkommen
unzutreffend
vollkommen
zutreffend
33. Ich finde wenig Befriedigung darin, angestrengt stundenlang nachzudenken.
vollkommen
unzutreffend
vollkommen
zutreffend
34. Es genügt, dass etwas funktioniert, mir ist egal, wie oder warum.
vollkommen
unzutreffend
vollkommen
zutreffend
35. Es genügt mir, einfach die Antwort zu kennen, ohne die Gründe für die Antwort auf ein Problem zu
verstehen.
vollkommen
unzutreffend
vollkommen
zutreffend
36. Wenn es um Menschen geht, kann ich meinem unmittelbaren Gefühl vertrauen.
vollkommen
unzutreffend
vollkommen
zutreffend
37. Bei Kaufentscheidungen entscheide ich oft aus dem Bauch heraus.
vollkommen
unzutreffend
vollkommen
zutreffend
38. Ich trage nicht gern die Verantwortung für eine Situation, die sehr viel Denken erfordert.
vollkommen
unzutreffend
vollkommen
zutreffend
39. Wenn ich mich (mit dem Auto/Rad) verfahren habe, entscheide ich mich an Straßenkreuzungen meist
ganz spontan, in welche Richtung ich weiterfahre.
vollkommen
unzutreffend
vollkommen
zutreffend
40. Ich versuche, Situationen vorauszuahnen und zu vermeiden, in denen die Wahrscheinlichkeit groß ist,
dass ich intensiv über etwas nachdenken muss.
vollkommen
unzutreffend
vollkommen
zutreffend
41. Wenn ich eine Aufgabe erledigt habe, die viel geistige Anstrengung erfordert hat, fühle ich mich eher
erleichtert als befriedigt.
vollkommen
unzutreffend
vollkommen
zutreffend
42. Ich glaube, ich kann meinen Gefühlen vertrauen.
vollkommen
unzutreffend
vollkommen
zutreffend
43. Ich erkenne meistens, ob eine Person recht oder unrecht hat, auch wenn ich nicht erklären kann, warum.
vollkommen
unzutreffend
vollkommen
zutreffend
44. Ich spüre es meistens sofort, wenn jemand lügt.
vollkommen
unzutreffend
vollkommen
zutreffend
45. Denken entspricht nicht dem, was ich unter Spaß verstehe.
vollkommen
unzutreffend
vollkommen
zutreffend
46. Wenn ich mir eine Meinung zu einer Sache bilden soll, verlasse ich mich ganz auf meine Intuition.
vollkommen
unzutreffend
vollkommen
zutreffend
47. Ich würde lieber eine Aufgabe lösen, die Intelligenz erfordert, schwierig und bedeutend ist, als eine
Aufgabe, die zwar irgendwie wichtig ist, aber nicht viel Nachdenken erfordert.
vollkommen
unzutreffend
vollkommen
zutreffend
48. Ich habe schwere Zeiten hinter mir, in denen ich versuchte, meine schlechten Gewohnheiten in den Griff
zu bekommen.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
49. Ich bin faul.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
50. Ich sage Dinge, die unangemessen sind.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
51. Ich tue Dinge, die mir Spaß machen, auch wenn sie schlecht für mich sind.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
52. Ich lehne Dinge ab, die schlecht für mich sind.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
53. Ich wollte, ich hätte mehr Selbstdisziplin.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
54. Ich kann Versuchungen gut widerstehen.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
55. Bekannte und Freunde sagen, dass ich eine eiserne Selbstdisziplin habe.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
56. Ich habe im Allgemeinen Konzentrationsprobleme.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
57. Ich bin in der Lage, effektiv an der Erreichung von langfristigen Zielen zu arbeiten.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
58. Manchmal kann ich mich nicht davon abhalten etwas zu tun, von dem ich genau weiß, dass es schlecht
für mich ist.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
59. Ich handle häufig ohne alle alternativen Handlungsmöglichkeiten in Betracht zu ziehen.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
60. Freude und Spaß halten mich manchmal davon ab, meine Arbeit zu erledigen.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
Bitte beantworten Sie die folgenden Fragen spontan, ohne lange nachzudenken!
1. Wie sehr bemühten Sie sich darum, das Experiment präzise durchzuführen?
gar nicht
sehr
2. Wie schwierig fanden Sie das Experiment?
gar nicht
sehr
3. Stellen Sie sich bitte Folgendes vor: Sie haben gerade ein Fernsehgerät für 500 € gekauft. Ein Bekannter erzählt Ihnen, dass er nur 450 € für genau das gleiche Gerät bezahlt hat. Wie stark würden Sie sich deswegen ärgern?
gar nicht
sehr
4. Wie wichtig war es Ihnen im heutigen Experiment gut abzuschneiden?
gar nicht wichtig
sehr wichtig
Nennen Sie uns kurz die Gründe, warum es Ihnen (nicht) wichtig war, gut abzuschneiden:
5. Sehen Sie sich generell in der Lage, schwierige Aufgaben gut meistern zu können?
gar nicht
sehr
6. Stellen Sie sich vor, Sie hätten 150 € im Lotto gewonnen. Wie stark würden Sie sich über den Gewinn freuen?
gar nicht
sehr
7. Stellen Sie sich vor, Sie hätten 30 € verloren. Wie stark würden Sie den Verlust bedauern?
gar nicht
sehr
8. Wie schätzen Sie Ihre allgemeinen statistischen Vorkenntnisse ein?
sehr schlecht
sehr gut
Wodurch haben Sie diese ggf. erworben?
9. Wie schätzen Sie Ihre Fähigkeiten in der Berechnung von Wahrscheinlichkeiten ein?
sehr schlecht
sehr gut
10. Haben Sie bereits Erfahrung mit dieser Art von Entscheidungssituationen/Szenarien?
Ja
Nein
Falls ja, in welchem Zusammenhang haben Sie diese gemacht?
11. Auf welchen Überlegungen beruhten Ihre Entscheidungen während der Studie?
12. Erinnern Sie sich noch, welcher Betrag Ihnen im Experiment für jede gezogene blaue Kugel
gutgeschrieben wurde?
__________ Cent
13. Fühlten Sie sich während der EEG-Messung durch irgendwelche äußeren Einflüsse gestört?
Ja
Falls ja, durch welche?
Nein
Bitte stellen Sie uns für die wissenschaftliche Auswertung noch einige Informationen über Ihre Person zur Verfügung.
1. Ihr Alter: ___________________________
2. Ihr Geschlecht: weiblich / männlich
3. Muttersprache: ___________________________
4. Hauptfach: ___________________________
5. Nebenfach: ___________________________
6. momentan besuchtes Fachsemester: ___________________________
7. Ihre Leistungskurse im Abi? 1.____________ 2.__________
8. Ihr Notendurchschnitt im Abi? ___________________________
9. Das Bundesland, in dem Sie Abi gemacht haben? ___________________________
10. Ihre letzte Mathematiknote? ___________________________
11. Ihr Berufswunsch? ___________________________
Ihre Daten werden absolut ANONYM behandelt!
Vielen Dank für Ihre Mitarbeit!
Appendix D: Material of Study 1b (in German)
Instructions for the low-incentive group
(note that instructions for the high-incentive group were the same with the exception that
participants received 36 cents per successful draw)
Liebe Versuchsteilnehmerin, lieber Versuchsteilnehmer,
vielen Dank für Ihre Teilnahme an diesem entscheidungstheoretischen Experiment. Hierbei werden
Ihnen auf dem Bildschirm zwei Behälter („Urnen“) präsentiert, in welchen sich blaue und grüne
Kugeln befinden. Ziel des Spiels ist es, möglichst viele „Gewinnerkugeln“ zu ziehen, wobei in
jedem Durchgang neu festgelegt werden wird, ob blau oder grün gewinnt. In jedem Fall muss nach
bestimmten Regeln möglichst geschickt aus beiden Urnen gezogen werden. Im Folgenden werden
zunächst die Elemente auf dem Bildschirm und die Bedienung über Tastatur erklärt. Anschließend
folgen die genauen Spielregeln.
Bildschirm
Das Spiel besteht aus mehreren Durchgängen, bei denen Kugeln aus Urnen gezogen werden.
Bei jedem Durchgang werden Ihnen zwei Urnen angezeigt, die jeweils sechs Kugeln enthalten.
Es gibt blaue und grüne Kugeln. Am Bildschirm werden sie schwarz angezeigt, solange sie
„verdeckt“ in der Urne sind. Unter den Urnen wird angezeigt, welche Kugeln Sie in diesem
Durchgang bereits gezogen haben. Im obigen Beispiel also eine blaue und eine grüne.
Beachten Sie: Wenn eine Urne nicht schwarz, sondern grau angezeigt wird, können Sie gerade
nicht aus ihr ziehen und müssen deshalb die andere wählen.
Vor jedem Durchgang erscheint eine Tabelle auf dem Bildschirm, die zentrale Informationen
aus den Spielregeln zusammenfasst und Sie informiert, wie viele Durchgänge bereits absolviert
wurden.
Bedienung
Die Bedienung erfolgt ausschließlich über drei Tasten. Zwei Tasten sind auf der Tastatur grün
markiert. Mit der linken („F“) ziehen Sie eine Kugel aus der linken Urne und entsprechend mit der
rechten („J“) eine aus der rechten Urne. Mit der Leertaste bewegen Sie sich durch einen Durchlauf
bzw. starten einen neuen Durchlauf, wenn Sie auf dem Bildschirm dazu aufgefordert werden.
Beachten Sie hierbei, dass es nach dem Ziehen einer Kugel und nach dem Erscheinen der
gezogenen Kugel jeweils eine kurze Pause gibt, während der der Computer nicht auf die Tasten
reagiert. Drücken Sie bitte erst wieder eine der Tasten, wenn Sie auf dem Bildschirm dazu
aufgefordert werden, und halten Sie währenddessen den Blick auf das Fixationskreuz in der Mitte
des Bildschirms gerichtet. Wichtig für die EEG-Messung ist zudem, dass Sie sich während des
gesamten Ablaufs möglichst wenig bewegen. Lassen Sie bitte die beiden Zeigefinger über der „F“- bzw. „J“-Taste sowie die Daumen über der Leertaste.
Spielregeln
Ablauf
Insgesamt werden 60 Durchläufe absolviert, von denen jeder aus zwei Zügen besteht. Zu Beginn
jedes Durchlaufs werden die Gegebenheiten der beiden Umweltbedingungen kurz
zusammengefasst. Danach ziehen Sie per Tastendruck eine Kugel aus der linken Urne. Der Zufall
bestimmt, welche der Kugeln aus der Urne gezogen wird. Das Ergebnis (blau oder grün) wird unter
der Urne angezeigt und Sie dürfen erneut ziehen, wobei jetzt beide Urnen zur Wahl stehen. Auch
das Ergebnis des zweiten Zugs wird unter der entsprechenden Urne angezeigt. Anschließend
beginnen Sie einen neuen Durchlauf.
Beachten Sie: Eine gezogene Kugel wird sofort wieder in die Urne zurückgelegt! Durch das Ziehen
einer Kugel verändert sich also zwischen erstem und zweitem Versuch nicht die Anzahl der blauen
und grünen Kugeln in den Urnen. Sie ziehen beide Male aus den absolut selben Urnen.
Gewinnmöglichkeit
Nach dem ersten Zug bekommen Sie angezeigt, welche Farbe im zweiten Zug gewinnt. Wenn Sie
dann eine Kugel dieser Farbe ziehen, bekommen Sie 18 Cent gutgeschrieben; für das Ziehen einer
Kugel der anderen Farbe bekommen Sie nichts. Durch geschicktes Ziehen und mit ein bisschen
Glück können Sie in diesem Experiment also eine beträchtliche Summe verdienen!
Umweltbedingungen
Das Wichtigste an diesem Spiel ist daher, dass Sie verstehen, wie viele blaue und grüne Kugeln
sich in den Urnen befinden. Lesen Sie die folgenden Abschnitte also besonders aufmerksam durch!
Im Experiment werden zwei Umweltbedingungen unterschieden:
In der ersten Umweltbedingung befinden sich in der linken Urne 4 blaue und 2 grüne Kugeln, in der rechten Urne befinden sich 6 blaue Kugeln.
In der zweiten Umweltbedingung befinden sich in der linken Urne 2 blaue und 4 grüne Kugeln, in der rechten Urne befinden sich 6 grüne Kugeln.
Wie bereits erwähnt, müssen Sie in jedem Durchlauf zuerst eine Kugel aus der linken Urne ziehen,
die dann wieder zurückgelegt wird, und Sie ziehen danach die nächste Kugel aus einer der beiden
Urnen. Allerdings wissen Sie dabei nicht, ob für den jeweiligen Durchlauf die erste oder die zweite
Umweltbedingung herrscht. Dies wird für jeden einzelnen Durchlauf per Zufall durch den
Zufallsgenerator des Computers festgelegt. Die Chance dafür, in einem Durchlauf die erste oder die
zweite Umweltbedingung vorzufinden, liegt jedes Mal bei 50 %. Während eines Durchlaufs
verändert sich die Umweltbedingung nicht!
In der folgenden Tabelle ist die Verteilung der blauen und grünen Kugeln unter den verschiedenen
Umweltbedingungen nochmals zusammenfassend dargestellt. Diese Tabelle wird Ihnen bei jedem
neuen Durchgang angezeigt.
Bedingung   Chance je Durchlauf   Linke Urne         Rechte Urne
eins        50 %                  4 blaue, 2 grüne   6 blaue
zwei        50 %                  2 blaue, 4 grüne   6 grüne
Um möglichst oft eine Gewinnerkugel zu ziehen und damit möglichst viel Geld zu verdienen, ist es
wichtig, dass Sie die obige Tabelle wirklich verstanden haben. Bei Unklarheiten dazu oder zum
Versuch generell fragen Sie bitte bei der Versuchsleiterin nach.
Zur Erinnerung: Für jede gezogene Gewinnerkugel werden Ihnen 18 Cent gutgeschrieben.
Im Folgenden werden Sie zunächst 3 Übungsdurchgänge absolvieren, um sich mit dem Ablauf des
Experiments vertraut zu machen.
Viel Spaß und viel Erfolg!
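The task described in these instructions reduces to a Bayesian update: the first (forced) draw from the left urn is evidence about which environmental condition holds, and the optimal second draw picks the urn that maximizes the probability of the announced winning color under the posterior. As an editorial illustration only (not part of the original participant materials; all names are illustrative), a minimal Python sketch:

```python
from fractions import Fraction as F

# Probability of drawing a blue ball from each urn, per environmental
# condition (Umweltbedingung) as stated in the instructions:
# condition 1: left urn 4 blue / 2 green, right urn 6 blue
# condition 2: left urn 2 blue / 4 green, right urn 6 green
P_BLUE = {
    1: {"left": F(4, 6), "right": F(6, 6)},
    2: {"left": F(2, 6), "right": F(0, 6)},
}
PRIOR = {1: F(1, 2), 2: F(1, 2)}  # each condition occurs with 50 % chance

def posterior(first_draw):
    """Posterior over conditions after the first draw from the left urn."""
    like = {c: P_BLUE[c]["left"] if first_draw == "blue"
            else 1 - P_BLUE[c]["left"] for c in PRIOR}
    evidence = sum(PRIOR[c] * like[c] for c in PRIOR)
    return {c: PRIOR[c] * like[c] / evidence for c in PRIOR}

def best_second_urn(first_draw, winning_color):
    """Urn maximizing the probability of drawing the winning color."""
    post = posterior(first_draw)
    def p_win(urn):
        p_blue = sum(post[c] * P_BLUE[c][urn] for c in post)
        return p_blue if winning_color == "blue" else 1 - p_blue
    probs = {urn: p_win(urn) for urn in ("left", "right")}
    return max(probs, key=probs.get), probs
```

For example, after drawing blue and being told that blue wins, the posterior for condition 1 is 2/3, and the right urn (all blue under condition 1) yields a winning probability of 2/3 versus 5/9 for the left urn.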
Experimental protocol for both incentive groups
VPNR:
Cond.:
VL:
Klar? (kaum klar / völlig klar / klar nach Erkl. / problematisch)
VP erklärt Aufgabe
Kugelverteilung
Ziehen mit Zurücklegen/Unabhängigkeit der Züge
Bedeutung der Umweltbedingungen und wann sie sich ändern
Erster Zug immer links
Bezahlungsmodus (welche Farbe, Centbetrag)
Ungefährer erwarteter Gewinn: _____________
Bemerkung:
Final questionnaire for both incentive groups
Bitte geben Sie an, wie Sie jetzt gerade in diesem Moment die blaue bzw. die grüne Kugel
empfinden. Empfindungen können danach unterschieden werden, wie sehr sie zum einen
positiv bzw. negativ sind und zum anderen aufgeregt oder eher entspannt empfunden werden.
Kreuzen Sie bitte in jeder Zeile das Männchen an, das am besten Ihren Zustand wiedergibt,
wenn Sie an die blaue bzw. grüne Kugel denken.
Blaue Kugel
positiv … negativ
aufgeregt … entspannt
Grüne Kugel
positiv … negativ
aufgeregt … entspannt
Abschlussfragebogen
Bitte beantworten Sie die folgenden Fragen zu Ihrer Person spontan, ohne lange nachzudenken!
IHRE DATEN WERDEN ABSOLUT ANONYM BEHANDELT!
Falls Sie Fragen haben, wenden Sie sich bitte an die Versuchsleiterin!
1. Manchmal glaube ich, dass ich zu überhaupt nichts gut bin.
stimme ich zu
stimme ich nicht zu
2. Ich bin mir oft nichts wert.
stimme ich zu
stimme ich nicht zu
3. Wenn ich mich mit gleichaltrigen Personen vergleiche, schneide ich eigentlich ganz gut ab.
stimme ich zu
stimme ich nicht zu
4. Ich finde mich ganz in Ordnung.
stimme ich zu
stimme ich nicht zu
5. Ich kann mich manchmal selbst nicht leiden.
stimme ich zu
stimme ich nicht zu
6. Eigentlich bin ich mit mir ganz zufrieden.
stimme ich zu
stimme ich nicht zu
7. Manchmal weiß ich nicht, wofür ich eigentlich lebe.
stimme ich zu
stimme ich nicht zu
8. Ich wollte, ich könnte mehr Achtung vor mir haben.
stimme ich zu
stimme ich nicht zu
9. Manchmal fühle ich mich zu nichts nütze.
stimme ich zu
stimme ich nicht zu
10. Ich bin zufrieden mit mir.
stimme ich zu
stimme ich nicht zu
11. Wie glücklich fühlen Sie sich jetzt im Moment?
gar nicht
glücklich
sehr
glücklich
12. Wie euphorisch fühlen Sie sich jetzt im Moment?
gar nicht
euphorisch
sehr
euphorisch
13. Wie zufrieden fühlen Sie sich jetzt im Moment?
gar nicht
zufrieden
sehr
zufrieden
14. Wie niedergeschlagen fühlen Sie sich jetzt im Moment?
sehr niedergeschlagen
nicht niedergeschlagen
15. Wie aufgeregt fühlen Sie sich jetzt im Moment?
gar nicht
aufgeregt
sehr
aufgeregt
16. Wie traurig fühlen Sie sich jetzt im Moment?
gar nicht
traurig
sehr traurig
17. Wie einsam fühlen Sie sich jetzt im Moment?
gar nicht
einsam
sehr
einsam
18. Wie besorgt fühlen Sie sich jetzt im Moment?
gar nicht
besorgt
sehr
besorgt
19. Ich kann mir über andere sehr schnell einen Eindruck bilden.
vollkommen
unzutreffend
vollkommen
zutreffend
20. Ich akzeptiere die Dinge meist lieber so wie sie sind, anstatt sie zu hinterfragen.
vollkommen
unzutreffend
vollkommen
zutreffend
21. Ich finde es nicht sonderlich aufregend, neue Denkweisen zu lernen.
vollkommen
unzutreffend
vollkommen
zutreffend
22. Abstrakt zu denken reizt mich nicht.
vollkommen
unzutreffend
vollkommen
zutreffend
23. Ich denke, dass ich den Charakter einer Person sehr gut nach ihrer äußeren Erscheinung beurteilen kann.
vollkommen
unzutreffend
vollkommen
zutreffend
24. Mein erster Eindruck von anderen ist fast immer zutreffend.
vollkommen
unzutreffend
vollkommen
zutreffend
25. Das Denken in neuen und unbekannten Situationen fällt mir schwer.
vollkommen
unzutreffend
vollkommen
zutreffend
26. Wenn die Frage ist, ob ich anderen vertrauen soll, entscheide ich normalerweise aus dem Bauch heraus.
vollkommen
unzutreffend
vollkommen
zutreffend
27. Der erste Einfall ist oft der beste.
vollkommen
unzutreffend
vollkommen
zutreffend
28. Die Vorstellung, mich auf mein Denkvermögen zu verlassen, um es zu etwas zu bringen, spricht mich
nicht an.
vollkommen
unzutreffend
vollkommen
zutreffend
29. Bei den meisten Entscheidungen ist es sinnvoll, sich auf sein Gefühl zu verlassen.
vollkommen
unzutreffend
vollkommen
zutreffend
30. Ich würde lieber etwas tun, das wenig Denken erfordert, als etwas, das mit Sicherheit meine
Denkfähigkeit herausfordert.
vollkommen
unzutreffend
vollkommen
zutreffend
31. Ich bin ein sehr intuitiver Mensch.
vollkommen
unzutreffend
vollkommen
zutreffend
32. Ich vertraue meinen unmittelbaren Reaktionen auf andere.
vollkommen
unzutreffend
vollkommen
zutreffend
33. Ich finde wenig Befriedigung darin, angestrengt stundenlang nachzudenken.
vollkommen
unzutreffend
vollkommen
zutreffend
34. Es genügt, dass etwas funktioniert, mir ist egal, wie oder warum.
vollkommen
unzutreffend
vollkommen
zutreffend
35. Es genügt mir, einfach die Antwort zu kennen, ohne die Gründe für die Antwort auf ein Problem zu
verstehen.
vollkommen
unzutreffend
vollkommen
zutreffend
36. Wenn es um Menschen geht, kann ich meinem unmittelbaren Gefühl vertrauen.
vollkommen
unzutreffend
vollkommen
zutreffend
37. Bei Kaufentscheidungen entscheide ich oft aus dem Bauch heraus.
vollkommen
unzutreffend
vollkommen
zutreffend
38. Ich trage nicht gern die Verantwortung für eine Situation, die sehr viel Denken erfordert.
vollkommen
unzutreffend
vollkommen
zutreffend
39. Wenn ich mich (mit dem Auto/Rad) verfahren habe, entscheide ich mich an Straßenkreuzungen meist
ganz spontan, in welche Richtung ich weiterfahre.
vollkommen
unzutreffend
vollkommen
zutreffend
40. Ich versuche, Situationen vorauszuahnen und zu vermeiden, in denen die Wahrscheinlichkeit groß ist,
dass ich intensiv über etwas nachdenken muss.
vollkommen
unzutreffend
vollkommen
zutreffend
41. Wenn ich eine Aufgabe erledigt habe, die viel geistige Anstrengung erfordert hat, fühle ich mich eher
erleichtert als befriedigt.
vollkommen
unzutreffend
vollkommen
zutreffend
42. Ich glaube, ich kann meinen Gefühlen vertrauen.
vollkommen
unzutreffend
vollkommen
zutreffend
43. Ich erkenne meistens, ob eine Person recht oder unrecht hat, auch wenn ich nicht erklären kann, warum.
vollkommen
unzutreffend
vollkommen
zutreffend
44. Ich spüre es meistens sofort, wenn jemand lügt.
vollkommen
unzutreffend
vollkommen
zutreffend
45. Denken entspricht nicht dem, was ich unter Spaß verstehe.
vollkommen
unzutreffend
vollkommen
zutreffend
46. Wenn ich mir eine Meinung zu einer Sache bilden soll, verlasse ich mich ganz auf meine Intuition.
vollkommen
unzutreffend
vollkommen
zutreffend
47. Ich würde lieber eine Aufgabe lösen, die Intelligenz erfordert, schwierig und bedeutend ist, als eine
Aufgabe, die zwar irgendwie wichtig ist, aber nicht viel Nachdenken erfordert.
vollkommen
unzutreffend
vollkommen
zutreffend
48. Ich habe schwere Zeiten hinter mir, in denen ich versuchte, meine schlechten Gewohnheiten in den Griff
zu bekommen.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
49. Ich bin faul.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
50. Ich sage Dinge, die unangemessen sind.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
51. Ich tue Dinge, die mir Spaß machen, auch wenn sie schlecht für mich sind.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
52. Ich lehne Dinge ab, die schlecht für mich sind.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
53. Ich wollte, ich hätte mehr Selbstdisziplin.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
54. Ich kann Versuchungen gut widerstehen.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
55. Bekannte und Freunde sagen, dass ich eine eiserne Selbstdisziplin habe.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
56. Ich habe im Allgemeinen Konzentrationsprobleme.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
57. Ich bin in der Lage, effektiv an der Erreichung von langfristigen Zielen zu arbeiten.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
58. Manchmal kann ich mich nicht davon abhalten etwas zu tun, von dem ich genau weiß, dass es schlecht
für mich ist.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
59. Ich handle häufig ohne alle alternativen Handlungsmöglichkeiten in Betracht zu ziehen.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
60. Freude und Spaß halten mich manchmal davon ab, meine Arbeit zu erledigen.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
Ich bin jemand, der …
61. … gründlich arbeitet.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
62. … kommunikativ, gesprächig ist.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
63. … manchmal etwas grob zu anderen ist.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
64. … originell ist, neue Ideen einbringt.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
65. … sich oft Sorgen macht.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
66. … verzeihen kann.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
67. … eher faul ist.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
68. … aus sich herausgehen kann, gesellig ist.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
69. … künstlerische Erfahrungen schätzt.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
70. … leicht nervös wird.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
71. … Aufgaben wirksam und effizient erledigt.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
72. … zurückhaltend ist.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
73. … rücksichtsvoll und freundlich mit anderen umgeht.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
74. … eine lebhafte Fantasie, Vorstellungen hat.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
75. … entspannt ist, mit Stress umgehen kann.
trifft überhaupt
nicht auf mich zu
trifft völlig
auf mich zu
Bitte beantworten Sie die folgenden Fragen spontan, ohne lange nachzudenken!
1. Wie sehr bemühten Sie sich darum, das Experiment präzise durchzuführen?
gar nicht
sehr
2. Wie schwierig fanden Sie das Computerspiel?
gar nicht
sehr
3. Wie anstrengend fanden Sie die gesamte Studie (von Instruktionen bis Ausfüllen des Fragebogens)?
gar nicht
sehr
4. Stellen Sie sich vor, Sie hätten 150 € im Lotto gewonnen. Wie stark würden Sie sich über den Gewinn freuen?
gar nicht
sehr
5. Sehen Sie sich generell in der Lage, schwierige Aufgaben gut meistern zu können?
gar nicht
sehr
6. Wie anstrengend fanden Sie das Experiment insgesamt, d.h. während der EEG-Messung Entscheidungen zu treffen?
gar nicht
sehr
7. Wie schwierig fanden Sie es, entsprechend den Instruktionen Bewegungen und Blinzeln zu vermeiden?
gar nicht
sehr
8. In welchem Maße empfanden Sie das Tragen der EEG-Kappe und das Gel auf der Kopfhaut als unangenehm?
gar nicht
sehr
9. Stellen Sie sich vor, Sie hätten 30 € verloren. Wie stark würden Sie den Verlust bedauern?
gar nicht
sehr
10. Stellen Sie sich bitte Folgendes vor: Sie haben gerade ein Fernsehgerät für 500 € gekauft. Ein Bekannter erzählt Ihnen, dass er nur 450 € für genau das gleiche Gerät bezahlt hat. Wie stark würden Sie sich deswegen ärgern?
gar nicht
sehr
11. Wie viel Geld brauchen Sie in einem durchschnittlichen Monat unbedingt zum Leben (mit Miete)?
__________€
12. Wie sehr hat Sie die Möglichkeit, durch geschickte Entscheidungen Geld hinzu zu gewinnen, zur
Teilnahme an diesem Experiment motiviert?
gar nicht
sehr
13. Wie sehr hat Sie die Möglichkeit, Geld hinzu zu gewinnen, dazu angespornt, im Experiment möglichst
gut abzuschneiden?
gar nicht
sehr
14. Wie wichtig war es Ihnen im heutigen Experiment gut abzuschneiden?
gar nicht
wichtig
sehr
wichtig
Nennen Sie uns kurz die Gründe, warum es Ihnen (nicht) wichtig war, gut abzuschneiden:
15. Wie schätzen Sie Ihre allgemeinen statistischen Vorkenntnisse ein?
sehr schlecht
sehr gut
Wodurch haben Sie diese ggf. erworben?
16. Wie schätzen Sie Ihre Fähigkeiten in der Berechnung von Wahrscheinlichkeiten ein?
sehr schlecht
sehr gut
17. Haben Sie bereits Erfahrung mit dieser Art von Entscheidungssituationen/Szenarien?
Ja
Nein
Falls ja, in welchem Zusammenhang haben Sie diese gemacht?
18. Auf welchen Überlegungen beruhten Ihre Entscheidungen während der Studie?
19. Hatten Sie Schwierigkeiten, sich beim zweiten Zug an die aktuelle Gewinnfarbe zu erinnern, die nach
dem ersten Zug bekannt gegeben worden war?
Ja
Nein
Falls ja, wie oft kam es vor, dass Sie beim zweiten Zug die aktuelle Gewinnfarbe nicht wussten?
sehr selten
sehr häufig
20. Erinnern Sie sich noch, welcher Betrag Ihnen im Experiment für jede gezogene Gewinnerkugel
gutgeschrieben wurde?
__________ Cent
21. Fühlten Sie sich während der EEG-Messung durch irgendwelche äußeren Einflüsse gestört?
Ja
Falls ja, durch welche?
Nein
Bitte stellen Sie uns für die wissenschaftliche Auswertung noch einige Informationen über Ihre Person zur Verfügung.
1. Ihr Alter: ___________________________
2. Ihr Geschlecht: weiblich / männlich
3. Muttersprache: ___________________________
4. Hauptfach: ___________________________
5. Nebenfach: ___________________________
6. momentan besuchtes Fachsemester: ___________________________
7. Ihre Leistungskurse im Abi? ___________________________
8. Ihr Notendurchschnitt im Abi? ___________________________
9. Das Bundesland, in dem Sie Abi gemacht haben? ___________________________
10. Ihre letzte Mathematiknote? ___________________________
11. Ihr Berufswunsch? ___________________________
Ihre Daten werden absolut ANONYM behandelt!
Vielen Dank für Ihre Mitarbeit!
Appendix E: Material of Study 2 (in German)
Instructions for counterbalance condition 1
Liebe Versuchsteilnehmerin, lieber Versuchsteilnehmer,
vielen Dank für Ihre Teilnahme an diesem entscheidungstheoretischen Experiment. Hierbei gibt es
zwei „Urnen“, in welchen sich jeweils eine unterschiedliche Anzahl blauer und grüner Kugeln
befinden. Aus einer dieser Urnen zieht der Computer 4 Kugeln mit Zurücklegen. Ziel ist es,
möglichst oft richtig zu schätzen, aus welcher der beiden Urnen die Kugeln gezogen wurden.
Ablauf und Bildschirm
Das Experiment besteht aus 6 Teilen, in denen jeweils 100 Durchläufe absolviert werden. Nach
jedem Teil gibt es eine kurze Pause von zwei Minuten.
Vor Beginn jeden Durchlaufs zieht der Computer zufällig eine Zahl zwischen 1 und 4, die
bestimmt, aus welcher von zwei möglichen Urnen er anschließend 4-mal mit Zurücklegen zieht.
In jedem Durchlauf erscheinen zunächst für 2 Sekunden folgende Informationen in der Mitte des
Bildschirms:
Zum einen wird die Verteilung von blauen und grünen Kugeln in den beiden Urnen angezeigt.
Diese ist für alle Durchgänge die gleiche: In der linken Urne befinden sich 3 blaue und 1 grüne
Kugel, in der rechten Urne befinden sich 2 blaue und 2 grüne Kugeln.
Zusätzlich wird durch die Zahlen zwischen den Urnen angezeigt, nach welcher Regel der Computer
im aktuellen Durchlauf eine der beiden Urnen auswählt (diese Regel ändert sich von Durchlauf zu
Durchlauf!). Im obigen Beispiel zieht er aus der linken Urne, falls er zuvor die Zahl 1 gezogen hat,
und aus der rechten Urne, falls er zuvor die Zahl 2, 3 oder 4 gezogen hat.
Nach 2 Sekunden verschwindet diese Information wieder. Der Computer zieht 4-mal aus der Urne,
die er gewählt hat, wobei jede gezogene Kugel sofort wieder in die Urne zurückgelegt wird.
Durch das Ziehen einer Kugel verändert sich also nicht die Anzahl der blauen und grünen
Kugeln in der Urne. Der Zufall bestimmt, welche der Kugeln aus der Urne gezogen wird. Die 4
gezogenen Kugeln werden untereinander auf dem Bildschirm angezeigt. Im dargestellten Beispiel
wurden 1 blaue und 3 grüne Kugeln gezogen:
Ihre Aufgabe besteht darin, zu schätzen, aus welcher der beiden Urnen die Kugeln gezogen
wurden. Per Tastendruck können Sie sich für die linke oder rechte Urne entscheiden. Wenn Sie
Ihre Entscheidung getroffen haben, beginnt nach einer kurzen Pause der nächste Durchlauf.
Nach jeweils 100 Durchgängen gibt es eine Pause von 2 Minuten. Wenn die 2 Minuten verstrichen
sind, beginnen Sie entsprechend dem Hinweis auf dem Bildschirm durch Betätigen der Leertaste
einen neuen Durchlauf.
Bedienung
Die Bedienung erfolgt ausschließlich über drei Tasten. Zwei Tasten sind auf der Tastatur gelb
markiert. Mit der linken („F“) entscheiden Sie sich für die linke Urne und entsprechend mit der
rechten („J“) für die rechte Urne. Mit der Leertaste starten Sie einen neuen Durchlauf, wenn Sie auf
dem Bildschirm dazu aufgefordert werden.
Bitte halten Sie Ihren Blick immer möglichst auf das Fixationsquadrat in der Mitte des Bildschirms
gerichtet. Wichtig für die EEG-Messung ist zudem, dass Sie sich während des gesamten Ablaufs
möglichst wenig bewegen. Lassen Sie bitte die beiden Zeigefinger über der „F“- bzw. „J“-Taste.
Gewinnmöglichkeit
Für jede richtige Entscheidung, die Sie treffen, bekommen Sie 6 Cent gutgeschrieben; für eine
falsche Entscheidung bekommen Sie nichts. Durch geschickte Entscheidungen können Sie in
diesem Experiment also eine beträchtliche Summe verdienen!
Kugelverteilung und Urnenauswahl
Wichtig ist daher, dass Sie verstehen, wie viele blaue und grüne Kugeln sich in den Urnen
befinden, und nach welchen Regeln der Computer die Urne auswählt, aus der er zieht. Lesen Sie
den folgenden Abschnitt also besonders aufmerksam durch!
Im Experiment gibt es 2 Urnen: In der linken Urne befinden sich 3 blaue und 1 grüne Kugel, in der
rechten Urne befinden sich 2 blaue und 2 grüne Kugeln. Wie bereits erwähnt, zieht der Computer in
jedem Durchlauf zuerst eine Zahl zwischen 1 und 4, durch welche bestimmt wird, aus welcher
Urne anschließend 4-mal mit Zurücklegen gezogen wird. Allerdings wissen Sie nicht, welche Zahl
zwischen 1 und 4 gezogen wurde. Dies wird für jeden einzelnen Durchlauf per Zufall durch den
Zufallsgenerator des Computers festgelegt. Sie werden in jedem Durchgang über die Regel
informiert, nach welcher eine der beiden Urnen ausgewählt wird (im obigen Beispiel: linke Urne
bei 1; rechte Urne bei 2, 3 oder 4). Beachten Sie, dass sich diese Regel von Durchgang zu
Durchgang ändern kann (z.B. linke Urne bei 1 oder 2; rechte Urne bei 3 oder 4)! Nur die
Verteilung der blauen und grünen Kugeln in den Urnen bleibt über alle Durchgänge hinweg die
gleiche.
Um möglichst oft eine richtige Entscheidung zu treffen und damit möglichst viel Geld zu
verdienen, ist es wichtig, dass Sie die obigen Informationen wirklich verstanden haben. Bei
Unklarheiten dazu oder zum Versuch generell fragen Sie bitte bei der Versuchsleiterin nach.
Zur Erinnerung: Für jede richtige Entscheidung werden Ihnen 6 Cent gutgeschrieben.
Im Folgenden werden Sie zunächst einige Übungsdurchgänge absolvieren, um sich mit dem Ablauf
des Experiments vertraut zu machen.
Viel Spaß und viel Erfolg!
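The guessing task these instructions describe is a standard Bayesian inference problem: the announced rule fixes the prior over urns, and the four draws with replacement supply a binomial likelihood. As an editorial sketch only (not part of the original participant materials; names are illustrative), in Python:

```python
from fractions import Fraction as F

# Blue-draw probability per urn, as stated in the instructions:
# left urn 3 blue / 1 green, right urn 2 blue / 2 green.
P_BLUE = {"left": F(3, 4), "right": F(2, 4)}

def posterior_left(n_left, blues, draws=4):
    """P(sample came from the left urn | observed draws).

    n_left: how many of the numbers 1-4 select the left urn under the
            rule announced for the trial (so the prior is n_left/4).
    blues:  number of blue balls among the `draws` draws with
            replacement (draw order does not affect the posterior).
    """
    prior = F(n_left, 4)
    def like(urn):
        p = P_BLUE[urn]
        return p ** blues * (1 - p) ** (draws - blues)
    num = prior * like("left")
    return num / (num + (1 - prior) * like("right"))
```

For the example rule in the instructions (left urn only on a 1, so prior 1/4) and the example sample of 1 blue and 3 green balls, the posterior for the left urn is 1/17, so guessing the right urn is optimal.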
Instructions for counterbalance condition 2
Liebe Versuchsteilnehmerin, lieber Versuchsteilnehmer,
vielen Dank für Ihre Teilnahme an diesem entscheidungstheoretischen Experiment. Hierbei gibt es
zwei „Urnen“, in welchen sich jeweils eine unterschiedliche Anzahl grüner und blauer Kugeln
befinden. Aus einer dieser Urnen zieht der Computer 4 Kugeln mit Zurücklegen. Ziel ist es,
möglichst oft richtig zu schätzen, aus welcher der beiden Urnen die Kugeln gezogen wurden.
Ablauf und Bildschirm
Das Experiment besteht aus 6 Teilen, in denen jeweils 100 Durchläufe absolviert werden. Nach
jedem Teil gibt es eine kurze Pause von zwei Minuten.
Vor Beginn jeden Durchlaufs zieht der Computer zufällig eine Zahl zwischen 1 und 4, die
bestimmt, aus welcher von zwei möglichen Urnen er anschließend 4-mal mit Zurücklegen zieht.
In jedem Durchlauf erscheinen zunächst für 2 Sekunden folgende Informationen in der Mitte des
Bildschirms:
Zum einen wird die Verteilung von grünen und blauen Kugeln in den beiden Urnen angezeigt.
Diese ist für alle Durchgänge die gleiche: In der linken Urne befinden sich 3 grüne und 1 blaue
Kugel, in der rechten Urne befinden sich 2 grüne und 2 blaue Kugeln.
Zusätzlich wird durch die Zahlen zwischen den Urnen angezeigt, nach welcher Regel der Computer
im aktuellen Durchlauf eine der beiden Urnen auswählt (diese Regel ändert sich von Durchlauf zu
Durchlauf!). Im obigen Beispiel zieht er aus der linken Urne, falls er zuvor die Zahl 1 gezogen hat,
und aus der rechten Urne, falls er zuvor die Zahl 2, 3 oder 4 gezogen hat.
Nach 2 Sekunden verschwindet diese Information wieder. Der Computer zieht 4-mal aus der Urne,
die er gewählt hat, wobei jede gezogene Kugel sofort wieder in die Urne zurückgelegt wird.
Durch das Ziehen einer Kugel verändert sich also nicht die Anzahl der grünen und blauen
Kugeln in der Urne. Der Zufall bestimmt, welche der Kugeln aus der Urne gezogen wird. Die 4
gezogenen Kugeln werden untereinander auf dem Bildschirm angezeigt. Im dargestellten Beispiel
wurden 1 grüne und 3 blaue Kugeln gezogen:
Ihre Aufgabe besteht darin, zu schätzen, aus welcher der beiden Urnen die Kugeln gezogen
wurden. Per Tastendruck können Sie sich für die linke oder rechte Urne entscheiden. Wenn Sie
Ihre Entscheidung getroffen haben, beginnt nach einer kurzen Pause der nächste Durchlauf.
Nach jeweils 100 Durchgängen gibt es eine Pause von 2 Minuten. Wenn die 2 Minuten verstrichen
sind, beginnen Sie entsprechend dem Hinweis auf dem Bildschirm durch Betätigen der Leertaste
einen neuen Durchlauf.
Bedienung
Die Bedienung erfolgt ausschließlich über drei Tasten. Zwei Tasten sind auf der Tastatur gelb
markiert. Mit der linken („F“) entscheiden Sie sich für die linke Urne und entsprechend mit der
rechten („J“) für die rechte Urne. Mit der Leertaste starten Sie einen neuen Durchlauf, wenn Sie auf
dem Bildschirm dazu aufgefordert werden.
Bitte halten Sie Ihren Blick immer möglichst auf das Fixationsquadrat in der Mitte des Bildschirms
gerichtet. Wichtig für die EEG-Messung ist zudem, dass Sie sich während des gesamten Ablaufs
möglichst wenig bewegen. Lassen Sie bitte die beiden Zeigefinger über der „F“- bzw. „J“-Taste.
Gewinnmöglichkeit
Für jede richtige Entscheidung, die Sie treffen, bekommen Sie 6 Cent gutgeschrieben; für eine
falsche Entscheidung bekommen Sie nichts. Durch geschickte Entscheidungen können Sie in
diesem Experiment also eine beträchtliche Summe verdienen!
Kugelverteilung und Urnenauswahl
Wichtig ist daher, dass Sie verstehen, wie viele grüne und blaue Kugeln sich in den Urnen
befinden, und nach welchen Regeln der Computer die Urne auswählt, aus der er zieht. Lesen Sie
den folgenden Abschnitt also besonders aufmerksam durch!
Im Experiment gibt es 2 Urnen: In der linken Urne befinden sich 3 grüne und 1 blaue Kugel, in der
rechten Urne befinden sich 2 grüne und 2 blaue Kugeln. Wie bereits erwähnt, zieht der Computer in
jedem Durchlauf zuerst eine Zahl zwischen 1 und 4, durch welche bestimmt wird, aus welcher
Urne anschließend 4-mal mit Zurücklegen gezogen wird. Allerdings wissen Sie nicht, welche Zahl
zwischen 1 und 4 gezogen wurde. Dies wird für jeden einzelnen Durchlauf per Zufall durch den
Zufallsgenerator des Computers festgelegt. Sie werden in jedem Durchgang über die Regel
informiert, nach welcher eine der beiden Urnen ausgewählt wird (im obigen Beispiel: linke Urne
bei 1; rechte Urne bei 2, 3 oder 4). Beachten Sie, dass sich diese Regel von Durchgang zu
Durchgang ändern kann (z.B. linke Urne bei 1 oder 2; rechte Urne bei 3 oder 4)! Nur die
Verteilung der grünen und blauen Kugeln in den Urnen bleibt über alle Durchgänge hinweg die
gleiche.
Um möglichst oft eine richtige Entscheidung zu treffen und damit möglichst viel Geld zu
verdienen, ist es wichtig, dass Sie die obigen Informationen wirklich verstanden haben. Bei
Unklarheiten dazu oder zum Versuch generell fragen Sie bitte bei der Versuchsleiterin nach.
Zur Erinnerung: Für jede richtige Entscheidung werden Ihnen 6 Cent gutgeschrieben.
Im Folgenden werden Sie zunächst einige Übungsdurchgänge absolvieren, um sich mit dem Ablauf
des Experiments vertraut zu machen.
Viel Spaß und viel Erfolg!
Experimental protocol for both counterbalance conditions
VPNR:
CB:
VL:
Klar? (kaum klar / völlig klar / klar nach Erkl. / problematisch)
Entscheidungsregel abhängig von Zufallszahl – Änderung!!
Kugelverteilung
Ziehen mit Zurücklegen
Aufgabe: Aus welcher Urne wurde gezogen
Ablauf am Bildschirm
Bezahlungsmodus
Ungefährer erwarteter Gewinn: _____________
Bemerkung:
Final questionnaire for both counterbalance conditions

Concluding questionnaire
Please answer the following questions about yourself spontaneously, without thinking for long!
YOUR DATA WILL BE TREATED IN ABSOLUTE ANONYMITY!
If you have any questions, please contact the experimenter!
(Items 1-54 are each answered on a scale from "completely untrue" to "completely true".)
1. I can form an impression of other people very quickly.
2. I usually prefer to accept things as they are rather than to question them.
3. My first impression of others is almost always accurate.
4. I find it difficult to think in new and unfamiliar situations.
5. I do not find it particularly exciting to learn new ways of thinking.
6. Thinking abstractly does not appeal to me.
7. I think I can judge a person's character very well from their outward appearance.
8. When it comes to deciding whether to trust others, I go with my gut feeling.
9. The first idea is often the best one.
10. The idea of relying on my thinking abilities to get ahead does not appeal to me.
11. For most decisions, it makes sense to rely entirely on one's feelings.
12. I would rather do something that requires little thought than something that is sure to challenge my thinking abilities.
13. I am a very intuitive person.
14. I trust my immediate reactions to other people.
15. I find little satisfaction in deliberating hard for hours on end.
16. It is enough for me that something works; I do not care how or why.
17. It is enough for me simply to know the answer to a problem without understanding the reasons behind it.
18. When it comes to people, I can trust my immediate feelings.
19. When making purchases, I often decide based on gut feeling.
20. I do not like to be responsible for a situation that requires a great deal of thinking.
21. When I have lost my way (by car or bike), I usually decide quite spontaneously at intersections which direction to take.
22. I try to anticipate and avoid situations in which there is a high probability that I will have to think intensively about something.
23. When I have completed a task that required a lot of mental effort, I feel relieved rather than satisfied.
24. I believe I can trust my feelings.
25. I can usually tell whether a person is right or wrong, even if I cannot explain why.
26. I usually sense it immediately when someone is lying.
27. Thinking is not my idea of fun.
28. When I have to form an opinion about something, I rely entirely on my intuition.
29. I would rather solve a task that requires intelligence and is difficult and important than one that is somehow important but does not require much thought.
30. Before making decisions, I usually think things through thoroughly first.
31. I carefully observe my innermost feelings.
32. Before making decisions, I usually first think about the goals I want to achieve.
33. I do not like situations in which I have to rely on my intuition.
34. I reflect on myself.
35. I prefer to make elaborate plans rather than leave anything to chance.
36. I prefer to draw conclusions based on my feelings, my knowledge of human nature, and my life experience.
37. Feelings play a major role in my decisions.
38. I am a perfectionist.
39. When I have to justify a decision, I think especially carefully beforehand.
40. When faced with a problem, I first take apart the hard facts and details before I decide.
41. I think before I act.
42. I prefer emotionally expressive people.
43. I think about my plans and goals more than other people do.
44. I like emotional situations, discussions, and films.
45. How my life goes depends on me.
46. Compared with others, I have not achieved what I deserve.
47. What one achieves in life is primarily a matter of fate or luck.
48. Through social or political involvement, one can influence social conditions.
49. I frequently find that others determine the course of my life.
50. Success has to be earned through hard work.
51. When I encounter difficulties in life, I often doubt my abilities.
52. The opportunities I have in life are determined by social circumstances.
53. The abilities one brings along matter more than any amount of effort.
54. I have little control over the things that happen in my life.
I am someone who …
(Items 55-69 are each answered on a scale from "does not apply to me at all" to "applies to me completely".)
55. … does a thorough job.
56. … is communicative, talkative.
57. … is sometimes somewhat rude to others.
58. … is original, comes up with new ideas.
59. … worries a lot.
60. … is able to forgive.
61. … tends to be lazy.
62. … is outgoing, sociable.
63. … values artistic experiences.
64. … gets nervous easily.
65. … completes tasks effectively and efficiently.
66. … is reserved.
67. … treats others considerately and kindly.
68. … has a lively imagination.
69. … is relaxed, handles stress well.
Please answer the following questions spontaneously, without thinking for long!
1. How understandable did you find the instructions? (not at all - very)
2. How hard did you try to carry out the experiment precisely? (not at all - very)
3. How strenuous did you find the study as a whole (from the instructions to filling in the questionnaire)? (not at all - very)
4. Imagine you had won 150 € in the lottery. How pleased would you be about the win? (not at all - very)
5. How difficult did you find the computer game? (not at all - very)
6. Do you generally see yourself as able to master difficult tasks well? (not at all - very)
7. How strenuous did you find the experiment itself, i.e., making decisions during the EEG recording? (not at all - very)
8. How difficult did you find it to avoid movements and blinking, as instructed? (not at all - very)
9. To what extent did you find wearing the EEG cap and the gel on your scalp unpleasant? (not at all - very)
10. Imagine you had lost 30 €. How much would you regret the loss? (not at all - very)
11. Imagine the following: you have just bought a television set for 500 €. An acquaintance tells you that he paid only 450 € for exactly the same set. How annoyed would you be about this? (not at all - very)
12. How much money do you absolutely need to live on in an average month (including rent)? __________ €
13. How much did the opportunity to win additional money through skillful decisions motivate you to take part in this experiment? (not at all - very)
14. How much did the opportunity to win additional money spur you on to perform as well as possible in the experiment? (not at all - very)
15. How important was it to you to perform well in today's experiment? (not at all important - very important)
Briefly tell us the reasons why it was (or was not) important to you to perform well:
16. How would you rate your general prior knowledge of statistics? (very poor - very good)
If applicable, how did you acquire it?
17. How would you rate your ability to calculate probabilities? (very poor - very good)
18. Do you already have experience with this kind of decision situation/scenario? Yes / No
If yes, in what context did you gain it?
19. Do you still remember what amount was credited to you in the experiment for each correct decision? __________ cents
20. What considerations were your decisions during the study based on?
o Whenever the rule made the two urns unequally likely, I always chose the urn that was selected with the higher probability.
o I always chose the urn that the colors of the drawn balls matched better.
o My decisions were always based both on the rule by which the two urns were selected and on the colors of the drawn balls.
o I mostly decided based on gut feeling.
o I switched the strategy my decisions were based on partway through.
o I had a different specific strategy for my decisions. More precisely:
21. Did you have difficulty remembering the current rule by which one of the two urns was selected by the computer (e.g., left urn for 1; right urn for 2, 3, or 4)? Yes / No
If yes, how often did it happen that you did not know the current rule while looking at the drawn balls? (very rarely - very frequently)
22. Did you feel disturbed by any external influences during the EEG recording? Yes / No
If yes, by which?
23. Imagine a fair six-sided die were rolled 1000 times. Out of these 1000 rolls, how many times do you think an even number (2, 4, or 6) would come up? __________
24. In a lottery, the chance of winning 10 euros is 1%. If 1000 people each bought one ticket for the lottery, how many people do you think would receive a 10-euro prize? __________
25. In a raffle, there is a 1 in 1000 chance of winning a car. What percentage of the tickets wins a car? __________
26. How good are you at working with fractions? (not at all good - extremely good)
27. How good are you at working with percentages? (not at all good - extremely good)
28. How good are you at calculating a 15% tip? (not at all good - extremely good)
29. How good are you at calculating the price of a shirt that is 25% off? (not at all good - extremely good)
30. How helpful do you find tables and charts as supplementary information when reading newspaper articles? (not at all - extremely)
31. When someone tells you the probability of a possible event, do you prefer it expressed in words (e.g., "it rarely happens") or in numbers (e.g., "the probability is 1%")? (always prefer words - always prefer numbers)
32. When you listen to the weather forecast, do you prefer the predictions expressed as percentages (e.g., "the chance of rain today is 20%") or in words (e.g., "the chance of rain today is low")? (always prefer percentages - always prefer words)
33. How often do you find information expressed in numbers useful? (never - very often)

For the scientific analysis, please provide us with some information about yourself.
1. Your age: ___________________________
2. Your sex: female / male
3. Native language: ___________________________
4. Major subject: ___________________________
5. Minor subject: ___________________________
6. Current semester of study: ___________________________
7. Your advanced courses (Leistungskurse) in the Abitur? ___________________________
8. Your grade average in the Abitur? ___________________________
9. The federal state (Bundesland) in which you took your Abitur? ___________________________
10. Your most recent mathematics grade? ___________________________
11. Your desired profession? ___________________________
Your data will be treated in absolute ANONYMITY!
Thank you for your cooperation!