
Response Times and Their Use in
the Cognitive Science of Choice
Robin Thomas (1), Trish Van Zandt (2), Joe Houpt (3), Mario Fific (4), & Joe Johnson (1)
(1) Miami University, Oxford, OH
(2) The Ohio State University, Columbus, OH
(3) Wright State University, Dayton, OH
(4) Grand Valley State University, MI
Typical Tasks
• Consider a signal detection experiment: one of two stimuli is presented, a standard (or noise) and a comparison (or signal), which differ in intensity on some dimension. The observer must determine which of the two occurred on each trial.
• A decision maker is given two gambles that differ in value and probability of earnings. Gamble A = 40% chance of winning $10, 60% chance of losing $5. Gamble B = 60% chance of winning $6, 40% chance of losing $9. Which gamble does the decision maker actually play? How long does it take to decide?
Typical Tasks
• A participant studies a list of items at time t0. Later, she
is presented with another list of items, some old, some
new. Her task is to indicate whether each item is old or
new.
• A learner trains on examples to discover which objects
belong in one of two categories (e.g., friend or foe,
poisonous or safe, malignant or benign). New examples
are presented to the learner that need to be classified.
• Which city is farther south, Paris or New York? How
confident are you (on a scale from 0 – 100%)?
In every case, we measure both the choice
and the time required to make it.
Typical summary measures
• Mean response times and variance, choice proportions
• RT densities and distributions (and functions of)
Figure: histogram estimate of the RT density (from Van Zandt, 2000) and empirical cumulative distribution function (from Ashby, et al., 1993).
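A minimal numpy sketch of these summary measures; the RT and choice values below are hypothetical, invented only for illustration.

```python
import numpy as np

# Hypothetical data: response times in seconds and choices (1 = "signal", 0 = "noise")
rt = np.array([0.53, 0.61, 0.48, 0.72, 0.55, 0.66, 0.59, 0.50])
choice = np.array([1, 1, 0, 1, 0, 1, 1, 0])

mean_rt = rt.mean()            # mean response time
var_rt = rt.var(ddof=1)        # sample variance of response time
p_signal = choice.mean()       # choice proportion

# Histogram estimate of the RT density
density, bin_edges = np.histogram(rt, bins=4, density=True)

# Empirical cumulative distribution function at each sorted RT
sorted_rt = np.sort(rt)
ecdf = np.arange(1, rt.size + 1) / rt.size

print(mean_rt, var_rt, p_signal)
```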
Overview
• Approaches to using response times in cognitive science
  – Macro-process modeling / mental architectures
    • Basic SFT paradigm & data variables
    • Dimensions of a processing system
      – Architectures
      – Stopping rules
      – Capacity
      – Dependence
    • Predictions & statistical analysis issues
    • Empirical example worked out (Johnson, et al., 2010)
  – Micro-process modeling / models of RT and accuracy
    • Sequential sampling basics
      – Random walk
      – Race models
      – Diffusion
      – "Easy" versions
    • Beyond simple choices: multialternative decisions
• Combining approaches
• Neural evidence
Mental Architectures
Systems Factorial Technology (Townsend & Nozawa, 1995), the "double-factorial paradigm," based on Sternberg (1969); see also Schweickert (1985) and Dzhafarov & Schweickert (1995).
Divided attention task: one stimulus is presented on each trial, and the observer is asked "Is there an arrow somewhere in the stimulus?" (an OR gate; an AND-gate version of the task can also be used, Houpt & Townsend, 2010, 2012).
- from Johnson, et al. (2010)
Mental Architectures
Dependent measure: RT, from which interaction contrasts are formed. Accuracy is either not analyzed (it is typically high) or analyzed separately (Schweickert, 1985).
Mean Interaction Contrast:
MIC = RT_LL – RT_LH – RT_HL + RT_HH
– where RT_ij is the mean response time in the condition in which one factor is at level i and the other factor is at level j (H = high salience, L = low salience)
– in the global/local arrow-search task, the saliency of the local-level arrow relative to a dash is the first factor, and the saliency of the global-level arrow relative to a dash is the second factor
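A minimal sketch of the MIC computation, assuming hypothetical condition-mean RTs (all values invented for illustration).

```python
# Hypothetical condition-mean RTs (ms) for the double-factorial salience conditions
mean_rt = {"LL": 650.0, "LH": 560.0, "HL": 555.0, "HH": 520.0}

# MIC = RT_LL - RT_LH - RT_HL + RT_HH
mic = mean_rt["LL"] - mean_rt["LH"] - mean_rt["HL"] + mean_rt["HH"]
print(mic)  # 55.0 here; MIC > 0 is consistent with parallel self-terminating or coactive processing
```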
Mental Architectures
Dependent Measure: RT from which interaction contrasts are
formed.
Survivor function = S(t) = P( T > t) = 1 – F(t)
where F(t) is the cumulative distribution function.
Survivor Interaction Contrast: SIC(t) = S_HH(t) – S_HL(t) – [S_LH(t) – S_LL(t)], where S_ij(t) is the survivor function of RT in factorial condition ij.
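A sketch of estimating the empirical survivor functions and SIC(t) on a common time grid; the gamma-distributed RTs below are simulated stand-ins for real data, not values from any study.

```python
import numpy as np

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(T > t) evaluated on a time grid."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def sic(rt_hh, rt_hl, rt_lh, rt_ll, t_grid):
    """SIC(t) = S_HH(t) - S_HL(t) - (S_LH(t) - S_LL(t))."""
    return (survivor(rt_hh, t_grid) - survivor(rt_hl, t_grid)
            - (survivor(rt_lh, t_grid) - survivor(rt_ll, t_grid)))

# Simulated stand-in RTs (ms) for the four factorial conditions
rng = np.random.default_rng(0)
rt_hh, rt_hl = rng.gamma(20, 20, 200), rng.gamma(25, 20, 200)
rt_lh, rt_ll = rng.gamma(25, 20, 200), rng.gamma(30, 20, 200)

t_grid = np.arange(0, 1500, 10)
sic_curve = sic(rt_hh, rt_hl, rt_lh, rt_ll, t_grid)
```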
Reaction time histograms
Figure: RT histograms for a conjunctive-rule ("AND") classification task with face stimuli varying in eye separation and lips position (categories "Sharks" and "Jets"); one panel per factorial salience condition (HH, HL, LH, LL), x-axis RT (ms), y-axis relative frequency.
Reaction time survivor functions
Figure: empirical survivor functions (probability on the y-axis) for the HH, HL, LH, and LL conditions, combined by addition and subtraction to form the SIC(t) curve.
How to calculate the survivor interaction contrast
(SIC) function
SIC(t) = S_HH(t) – S_HL(t) – (S_LH(t) – S_LL(t))
Figure: the four survivor functions plotted over RT (ms), and the resulting SIC(t) curve.
Mental Architectures
Dimensions of a processing model
Mental Architectures
Figure: schematic flow diagrams of serial processing, parallel processing, and coactive processing.
- from Johnson, et al. 2010
Mental Architectures
Using the salience factorial conditions (face example: eyes and lips channels feeding an OR or AND decision), each architecture and stopping-rule combination predicts a distinctive mean RT pattern (MIC) and SIC(t) signature:
A. Serial self-terminating: MIC = 0; SIC(t) = 0 at all t
B. Serial exhaustive: MIC = 0; SIC(t) negative early and positive late, with equal areas
C. Parallel self-terminating (OR): MIC > 0; SIC(t) positive at all t
D. Parallel exhaustive (AND): MIC < 0; SIC(t) negative at all t
E. Coactive: MIC > 0; SIC(t) has a small early negative portion followed by a larger positive portion
Figure: mean RT plots, SIC(t) curves computed in 10-ms RT bins, and architecture flow diagrams (input face → eyes and lips channels → OR or AND decision → response) for each case.
Mental Architectures
Capacity Coefficient:
• Use presence vs absence factorial conditions
• Indicates changes in processing resources due to an
increase in workload (# items/channels)
C(t) = H_AB(t) / [H_A(t) + H_B(t)] (for the OR task)
• where H_A(t) and H_B(t) are estimated from the single-target conditions, H(t) = ∫_0^t h(s) ds is the integrated hazard function, and h(t) = f(t)/S(t) is the hazard function
• Note that H(t) = –ln S(t), which is easy to estimate
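A minimal sketch of estimating C(t) for the OR task directly from raw RTs via H(t) = –ln S(t); the function and variable names are my own, not from any existing package.

```python
import numpy as np

def integrated_hazard(rts, t_grid):
    """H(t) = -ln S(t), with S(t) the empirical survivor function."""
    rts = np.asarray(rts)
    s = np.array([(rts > t).mean() for t in t_grid])
    return -np.log(np.clip(s, 1e-6, 1.0))   # clip avoids log(0) in the right tail

def capacity_or(rt_double, rt_single_a, rt_single_b, t_grid):
    """OR-task capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t))."""
    h_ab = integrated_hazard(rt_double, t_grid)
    denom = integrated_hazard(rt_single_a, t_grid) + integrated_hazard(rt_single_b, t_grid)
    with np.errstate(divide="ignore", invalid="ignore"):
        c = h_ab / denom
    return np.where(denom > 0, c, np.nan)    # undefined where no single channel has finished
```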
Mental Architectures
Capacity Coefficient:
• Measured against a baseline UCIP (unlimited-capacity, independent, parallel) model with self-termination
• Unlimited Capacity: No change in resources
available for individual items due to increased
overall workload
• Independent: Stochastic independence
• Parallel: Simultaneous processing of inputs
• Self-terminating: stops at first opportunity
• C(t) = 1: unlimited capacity
• C(t) > 1: supercapacity
• C(t) < 1: limited capacity
Mental Architectures
Statistical Issues:
Mean interaction contrast (MIC), which can be assessed via a standard factorial ANOVA test of the interaction
Survivor interaction contrast (Houpt & Townsend, 2010)
Capacity coefficient (Houpt & Townsend, 2012)
The above are Fisherian; Houpt promises that Bayesian approaches are forthcoming.
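As an illustration of the MIC-as-interaction test, a sketch using statsmodels on hypothetical long-format trial data; the column names and RT values are invented, and real SFT analyses are run per participant on many trials.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one correct-response RT (ms) per row,
# with the salience level (H/L) of each factor on that trial
df = pd.DataFrame({
    "rt":    [651, 648, 562, 558, 553, 556, 471, 468],
    "local": ["L", "L", "L", "L", "H", "H", "H", "H"],
    "glob":  ["L", "L", "H", "H", "L", "L", "H", "H"],
})

# Two-way factorial ANOVA; the local:glob interaction term tests MIC != 0
model = ols("rt ~ C(local) * C(glob)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```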
Mental Architectures
Empirical example: global-local processing in autism (Johnson, et al., 2010)
Participants: 10 ASD, 11 controls
Task: indicate whether an arrow is present
Response time and accuracy were measured; only RTs were analyzed
All MIC, SIC, and capacity analyses were performed on individual participants
In typical visual processing, the global level precedes and may interfere with the local level
A single-factor reversal (Townsend & Thomas, 1994) combined with the SIC(t) signature implies inhibitory parallel processing
Mental Architectures
Figure: facilitative parallel exhaustive processing.
Mental Architectures
Figure: coactive or facilitative parallel processing; inhibitory parallel processing.
Mental Architectures
Capacity results: some participants show super- or near-unlimited capacity; most show limited capacity.
Models of RT and Accuracy
SFT uses only the RTs of correct responses, a weakness of the approach.
Important information is also contained in error responses and in the probability of each response, especially in classification, recognition memory, and decision making.
Predominant approach: sequential sampling
• At each moment in time, evidence is accrued according to an underlying stochastic mechanism until there is enough to determine a response or a time limit has expired
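A minimal simulation sketch of this idea as a two-boundary random walk; the parameter values are arbitrary and chosen only for illustration.

```python
import numpy as np

def random_walk_trial(drift=0.1, threshold=20.0, noise=1.0, dt=1.0,
                      max_time=5000.0, rng=None):
    """One two-boundary sequential sampling trial: evidence starts at 0 and
    accrues in small noisy steps until it reaches +threshold ('A'),
    -threshold ('B'), or the time limit expires."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_time:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    if evidence >= threshold:
        return "A", t
    if evidence <= -threshold:
        return "B", t
    return None, t      # time limit reached before a boundary

rng = np.random.default_rng(1)
trials = [random_walk_trial(rng=rng) for _ in range(1000)]
p_a = np.mean([resp == "A" for resp, _ in trials])
mean_rt = np.mean([t for _, t in trials])
```

Raising the thresholds in this sketch produces slower but more accurate responding, the speed-accuracy tradeoff illustrated on the next slide.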
Models of RT and Accuracy
Phenomenon: speed-accuracy tradeoff
Sequential sampling models
Figure: the evidence state starts at 0 and evolves over deliberation time until it reaches the upper threshold (choose Option A) or the lower threshold (choose Option B); the crossing time Td is the decision time. Moving the thresholds outward trades speed for accuracy.
Models of RT and Accuracy
Race (Counter) models (e.g., Merkle & Van Zandt, 2006)
- from Merkle & Van Zandt (2006)
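A sketch of a two-counter (Poisson race) model in this spirit; the rates and criterion are invented, and this is not the specific model fit by Merkle & Van Zandt.

```python
import numpy as np

def poisson_race_trial(rate_a=8.0, rate_b=6.0, criterion=20, rng=None):
    """One trial of a two-counter race model: each counter accumulates
    Poisson-distributed counts and the first to reach the criterion wins.
    The waiting time for `criterion` counts at a given rate is
    Gamma(criterion, scale=1/rate), so no event-by-event loop is needed."""
    rng = rng or np.random.default_rng()
    finish_a = rng.gamma(criterion, 1.0 / rate_a)   # seconds
    finish_b = rng.gamma(criterion, 1.0 / rate_b)
    return ("A", finish_a) if finish_a < finish_b else ("B", finish_b)

rng = np.random.default_rng(2)
sims = [poisson_race_trial(rng=rng) for _ in range(5000)]
p_a = np.mean([resp == "A" for resp, _ in sims])
```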
Models of RT and Accuracy
Exemplar-based random walk model of classification learning
(Nosofsky & Palmeri, 1997)
- from Thomas (2006)
Models of RT and Accuracy
Ratcliff’s Diffusion Model (1978, 2002)
Figure: drift rate distributions, one for each stimulus category.
Models of RT and Accuracy
“Easy” Versions
• Offer closed-form solutions for response time and probability predictions
- from Wagenmakers, et al. (2007)
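A sketch of the EZ-diffusion closed-form equations (Wagenmakers et al., 2007) as I understand them; the input summary statistics in the example call are hypothetical.

```python
import numpy as np

def ez_diffusion(p_correct, rt_var, rt_mean, s=0.1):
    """Closed-form EZ-diffusion estimates of drift rate v, boundary separation a,
    and nondecision time Ter from accuracy, the variance of correct RTs (s^2),
    and the mean of correct RTs (s). p_correct must not be exactly 0, 0.5, or 1."""
    L = np.log(p_correct / (1.0 - p_correct))                       # logit of accuracy
    x = L * (L * p_correct**2 - L * p_correct + p_correct - 0.5) / rt_var
    v = np.sign(p_correct - 0.5) * s * x**0.25                      # drift rate
    a = s**2 * L / v                                                # boundary separation
    y = -v * a / s**2
    mean_decision_time = (a / (2.0 * v)) * (1.0 - np.exp(y)) / (1.0 + np.exp(y))
    ter = rt_mean - mean_decision_time                              # nondecision time
    return v, a, ter

# Hypothetical summary statistics for one condition
print(ez_diffusion(p_correct=0.80, rt_var=0.112, rt_mean=0.72))
```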
Models of RT and Accuracy
“Easy” Versions
• Offer closed-form solutions for response time and probability predictions
Linear Ballistic
Accumulator
- from Brown & Heathcote (2008)
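A simulation sketch in the spirit of the linear ballistic accumulator; the parameter names and values are my own, not those of Brown & Heathcote.

```python
import numpy as np

def lba_trial(drift_means=(1.2, 0.8), drift_sd=0.3, max_start=0.5,
              boundary=1.0, t0=0.2, rng=None):
    """One linear-ballistic-accumulator-style trial: each accumulator gets a
    uniform start point and a normally distributed drift, rises linearly
    (no within-trial noise), and the first to reach the boundary wins."""
    rng = rng or np.random.default_rng()
    starts = rng.uniform(0.0, max_start, size=len(drift_means))
    drifts = rng.normal(drift_means, drift_sd)
    drifts = np.where(drifts > 0, drifts, 1e-9)   # non-positive drifts effectively never finish
    finish = (boundary - starts) / drifts
    winner = int(np.argmin(finish))
    return winner, t0 + finish[winner]

rng = np.random.default_rng(3)
sims = [lba_trial(rng=rng) for _ in range(5000)]
p_first = np.mean([w == 0 for w, _ in sims])
```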
Models of RT and Accuracy
Beyond two choices: Decision Field Theory for multialternative decisions (Busemeyer & Townsend, 1993; Johnson & Busemeyer, 2005, 2008)
- Attention shifts at each moment to a particular dimension
of the decision problem
- An evaluation of each choice alternative is based on relative
values on the focal dimension
- This evaluation is used to update the preference state from
the previous moment
- Preference updating continues until an alternative surpasses
a decision threshold
• Attention shifting
• Evaluation of relative values
• Preference updating
• Decision threshold

Example attribute values for three hypothetical alternatives, with attention weights and scaled values in parentheses:

Attribute (weight)         Adams        Buchanan     Coolidge
Ratio (wFac = 0.40)        .05 (1.00)   .04 (.80)    .03 (.60)
Reputation (wRep = 0.30)   90 (1.00)    70 (.78)     80 (.89)
SAT score (wSAT = 0.20)    800 (.80)    900 (.90)    1000 (1.00)
Activities (wAct = 0.10)   50 (.63)     80 (1.00)    20 (.25)
Weighted value             .923         .834         .732
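A much-simplified simulation sketch using the scaled values and weights from the table above; the full DFT also includes similarity-dependent lateral inhibition (needed for the context effects discussed below), which this sketch omits. The threshold, decay, and noise values are arbitrary.

```python
import numpy as np

names = ["Adams", "Buchanan", "Coolidge"]
# Scaled attribute values from the table above (columns: Ratio, Reputation, SAT, Activities)
values = np.array([[1.00, 1.00, 0.80, 0.63],
                   [0.80, 0.78, 0.90, 1.00],
                   [0.60, 0.89, 1.00, 0.25]])
weights = np.array([0.40, 0.30, 0.20, 0.10])   # attention weights

# Long-run weighted evaluations: .923, .834, .732, matching the table
print(dict(zip(names, np.round(values @ weights, 3))))

def dft_trial(threshold=1.0, decay=0.05, noise=0.05, max_steps=10000, rng=None):
    """One deliberation: attention switches stochastically among attributes,
    each alternative is evaluated relative to the others on the attended
    attribute, and preferences accumulate until one crosses the threshold."""
    rng = rng or np.random.default_rng()
    pref = np.zeros(len(names))
    for step in range(1, max_steps + 1):
        attr = rng.choice(len(weights), p=weights)          # attention shifting
        vals = values[:, attr]
        valence = vals - vals.mean() + noise * rng.standard_normal(len(names))
        pref = (1.0 - decay) * pref + valence               # preference updating with decay
        if pref.max() >= threshold:                         # decision threshold
            return names[int(np.argmax(pref))], step
    return None, max_steps

print(dft_trial(rng=np.random.default_rng(4)))
```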
DFT: Illustration
Figure: preference states P(t) for alternatives A, B, and C evolve over deliberation time t until one crosses the threshold θ.
Multialternative choice
• Alternative space: options X and Y, plus possible added alternatives near X (a similar option S, a dominated decoy D, and a compromise option C)
• Dimension interpretations
• Binary choices: Pr(X | X,Y) = Pr(Y | X,Y) = 0.5
• Additional alternatives: {X,Y} vs. {X,Y,Z}
• Choice pair relations: e.g., Pr(X | X,C) = Pr(Y | Y,C)
Choice phenomena
• Similarity: Pr(X | X,Y,S) < Pr(Y | X,Y,S)
• Attraction (decoy): Pr(X | X,Y,D) > Pr(Y | X,Y,D)
• Compromise: Pr(C | X,Y,C) > Pr(X | X,Y,C) = Pr(Y | X,Y,C)
DFT: Account for phenomena
Figure: predicted choice probabilities Pr(X), Pr(Y), and Pr(S), Pr(D), or Pr(C) for each augmented choice set {X,Y,S}, {X,Y,D}, and {X,Y,C}, shown alongside the positions of X, Y, S, D, and C in the attribute space.
Combining Approaches
Thomas (2006) simulated diffusion models and random walk
models of choice (e.g., EBRW) in a factorial task to derive MIC
predictions
• characterized optimal responding in random walks and
diffusion models in additive factor paradigms
• provided a reinterpretation of previously paradoxical
findings regarding the effects of stimulus probability on
choice RT
Combining Approaches
- Fific, et al., 2010
Combining Approaches
- Townsend, et al., 2012, “General recognition theory extended to include response times:
Predictions for a class of parallel systems”, JMP
Neural Evidence
- Smith & Ratcliff (2004)
Neural Evidence
- from Purcell, et al. 2012
Summary & Conclusions
• Two major approaches to understanding response times in choice
  – Axiomatic analysis of mental architecture in factorial paradigms
    • Parameter-free, class-wide applicability
    • Accuracy information not generally taken into account (exception: Schweickert's work)
  – Micro-process models of both accuracy and decision time: sequential sampling
    • Computationally complex, though some 'EZ' versions exist
    • Parametric
    • Some efforts to incorporate macro axiomatic logic into micro-process models
• Neural evidence supports the assumption of information accumulation to a threshold