
What is Cognitive Science?
Zenon Pylyshyn, Rutgers Center for Cognitive Science
What’s in the mind? How do we know?
What is special about cognition?
• Cognition (from Latin “cogito”) refers to the capacity to
know, and by extension to reason, perceive, plan, decide,
solve problems, infer the beliefs of others, communicate
by language as well as in other ways, and all the other
capabilities we associate with intelligent activity.
• What is central to all such activity is that it relies on
representations of the (actual or imagined) world.
 Cognitive science is the study of systems that represent and that
use their representations rationally, e.g., to draw inferences.
 A computer is another such system, so computing has become
the basic paradigm of cognitive science.
• In the last 40 years, The Representational Theory of Mind
has become The Computational Theory of Mind
Cognitive science is a delicate mixture
of the obvious and the incredible
Granny was almost right:
Behavior really is governed by what we know and
what we want (together with the mechanisms for
representing and for drawing inferences from these)
It’s emic, not etic properties that matter
Kenneth Pike
What determines our behavior is not how the
world is, but how we represent it as being
 As Chomsky pointed out in his review of Skinner, if
we describe behavior in relation to the objective
properties of the world, we would have to conclude
that behavior is essentially stimulus-independent
 Every behavioral regularity (other than physical ones
like falling) is cognitively penetrable
It’s emic states that matter!
The central role of representation creates
some serious problems for a natural science
 What matters is what representations are about
 But how can the fact that a belief is about some
particular thing have an observable consequence?
• How can beliefs about ‘Santa Claus’ or the ‘Holy
Grail’ determine behavior when they don’t exist?
 In a natural science if “X causes Y” then X must
exist and be causally (lawfully) connected to Y!
• Even when X exists, it is not X’s physical properties
that are relevant, but what X is perceived as!
 e.g., the North Star & navigation
Is it hopeless to think we can have
a natural science of cognition?
Along comes The computational theory of mind
“the only straw afloat”
The major historical milestones
• Brentano’s recognition of the problem of
intentionality: Mental States are about something,
but aboutness is not a physical relation. Therefore,
psychology cannot be a natural science.
• The formalist movement in the foundations of
mathematics: Hilbert, Kurt Gödel, Bertrand Russell
& Alfred Whitehead, Alan Turing, Alonzo Church,
… provided a technique by which logical reasoning
could be automated.
• Representational/Computational theory of mind: The
modern era: Newell & Simon, Chomsky, Fodor
So…intelligent systems behave the way
they do because of what they represent
• But in order to function under causal laws,
the representations must be instantiated in
physical properties
• To encode knowledge in physical properties
one first encodes it in symbolic form (Proof
Theory tells us how) and then instantiates
those symbolic codes physically (computer
science tells us how)
How to make a purely mechanical system reason about things
it does not ‘understand’ or ‘know about’? Symbolic logic.
Consider the “equation” or statement
(1) Married (John, Mary) or Married (John, Susan)
and the statement
(2) not[Married (John, Susan)].
From these two statements you can conclude
(3) Married (John, Mary)
But notice that (3) follows from (1) and (2) regardless of what is in the
parts of the equation not occupied by the terms “or” and “not”, so you could
write down the equations without mentioning marriage or John or Mary
or, for that matter, anything having to do with the world. Try replacing
these expressions with the meaningless letters P and Q. The inference still
holds:
(1') P or Q
(2') not Q
therefore,
(3') P
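To make this concrete, here is a small Python sketch (an illustration added here, not part of the original slides; the tuple encoding of premises and the function name are assumptions) of a purely formal rule that draws the inference above without “knowing” what any of its symbols are about:

```python
# Illustrative sketch: a purely formal inference rule that "reasons" without
# knowing what its symbols mean. Premises are encoded as ("or", P, Q) and ("not", Q).

def disjunctive_syllogism(disjunction, negation):
    """From ("or", P, Q) and ("not", Q) conclude P (or Q, if P is the negated one)."""
    op, p, q = disjunction
    neg_op, negated = negation
    assert op == "or" and neg_op == "not"
    if negated == q:
        return p
    if negated == p:
        return q
    return None  # the rule does not apply

# The same rule works whether the symbols mention marriage or nothing at all:
print(disjunctive_syllogism(("or", "Married(John, Mary)", "Married(John, Susan)"),
                            ("not", "Married(John, Susan)")))  # -> Married(John, Mary)
print(disjunctive_syllogism(("or", "P", "Q"), ("not", "Q")))   # -> P
```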
Cognitive Science and the Tri-Level Hypothesis
Intelligent systems are organized at three
(or more) distinct levels:
1. The physical or biological level
2. The symbolic or syntactic level
3. The knowledge or semantic level
This means that different regularities may
require appeal to different levels
The essential role of representation creates
some serious problems for a natural science
 We are not aware of our thoughts …
 What we are usually aware of is what our thoughts are
about, not properties of the representation itself
 Need to distinguish properties of our thoughts and
properties of what they are about (e.g. mental images)
 We are not even aware of deciding, choosing or willing
an action [Wegner, D. M. (2002). The illusion of conscious will.
Cambridge, MA: MIT Press.]
 Introspective evidence is just one type of
evidence and it has turned out to be unreliable
 We are not directly aware of our representations
If that is so, how can we find out
what goes on in our mind…?
 Given these serious problems in
understanding cognition, is it even possible
in principle to find out how the mind works?
 Is there even a fact of the matter about what
process is responsible for certain behaviors?
 Is the only road to understanding cognition
through neuroscience?
 How can we discover the details of our
mental processes and how they work?
Weak vs Strong Equivalence
● Is cognitive science concerned only with developing
models that generate the same Input-Output behavior
as people do?
● A theory that correctly predicts (i.e., mimics) I-O
behavior is said to be weakly equivalent to the
psychological process.
● Everyone in Cognitive Science is interested in strong
equivalence – we want not only to predict the observed
behavior, but also to understand how it is generated.
● The how will usually take the form of an algorithm.
Simulating the Input-Output function
[Diagram: Input → Black Box → Output]
Can we do any better than I-O simulation
without looking inside the black box?
 If all you have is observed behavior, how
can you go beyond I-O simulation?
Simulating the Input-Output function
Think about this for a few minutes:

Is there any way to find out HOW a
person does a simple problem such as
adding two 4-digit numbers?

What are possible sources of evidence
that may be relevant to this question?
Modeling the Actual Process
(the algorithm used)
[Diagram: Input → Black Box → Output, plus an observable Index of process]
If all you have is observed behavior, how can you go
beyond I-O simulation (mimicry)?
 Answer: Not all observations are Inputs or Outputs:
some are meta-behavior or indexes of processes.
Example of the Sternberg
memory search task
● The initial input consists of the instructions and
the presentation of the memory set (n items).
● On each trial the particular input to the black box
consists of the presentation of a target letter.
● The output consists of a binary response (present
or absent). The time taken to respond is also
recorded. That is called the “Reaction Time”.
● The reaction time is not part of the output but is
interpreted as an index of the process (e.g., an
indication of how many steps were performed).
Example of the input-output of a
computational model of the Sternberg task
● Inputs: Memory set is (e.g.) C, D, H, N
● Inputs: Probe (e.g., C or F)
● Output: Pairs of Responses and Reaction Times
(e.g. output is something like “Yes, 460 msecs”)
● Does it matter how the Output is derived?
 It doesn’t if all you care about is predicting behavior
 It does if you care about how it works
 It does if you want your prediction to be robust and
scalable – i.e., to be based on general principles
Example of the input-output of a
computational model of the Sternberg task
• Inputs are: (1) Memory set = C,D,H,N
(2) Target probe = C (or R)
• Input-Output prediction using a table:
Input to model     Model prints out
C                  Yes   460 ms
N                  Yes   530 ms
R                  No    600 ms
H                  Yes   520 ms
M                  No    620 ms
Is this model weakly- or strongly-equivalent to a person?
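A model of this kind can literally be a lookup table. The sketch below is an illustration added here (not from the slides); the probe letters and times are simply copied from the table above, and the model contains no memory-search process at all:

```python
# Illustrative sketch: a pure lookup-table model of the Sternberg task.
# It reproduces the input-output pairs in the table above, but it contains
# no search process, so at best it mimics (is weakly equivalent to) a person.

RESPONSE_TABLE = {
    "C": ("Yes", 460),
    "N": ("Yes", 530),
    "R": ("No", 600),
    "H": ("Yes", 520),
    "M": ("No", 620),
}

def table_model(probe):
    answer, rt_ms = RESPONSE_TABLE[probe]
    return f"{answer}, {rt_ms} msecs"

print(table_model("C"))  # -> "Yes, 460 msecs"
```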
Example of a weakly equivalent
model of the Sternberg task
1. Store the memory set as a list L. Call the list size n.
2. Read the target item. (If there is no target item, then quit.)
3. Check whether the target is one of the letters in the list L.
4. If it is found in the list, set the response to “yes”; otherwise set it to “no”.
   (That provides the answer, but what about the time?)
5. If the response is “yes”, set the time to 500 + K * n + Rand(20–50)
6. If the response is “no”, set the time to 800 + K * n + Rand(20–50)
7. Print the response and the time.
8. Go to 2.
Is this the way people do it? How do you know?
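Rendered as a runnable Python sketch, the model above looks like this. The sketch is an illustration added here, not the original program: the base times 500 and 800 follow the steps above, while K = 40 and the reading of Rand as a random value between 20 and 50 ms are assumptions.

```python
import random

K = 40  # assumed per-item comparison cost in ms; the slide leaves K unspecified

def sternberg_model(memory_set, probes):
    L = list(memory_set)                        # step 1: store the memory set as a list L
    n = len(L)                                  #         and call the list size n
    for probe in probes:                        # step 2: read the next target item (quit when none left)
        answer = "yes" if probe in L else "no"  # steps 3-4: check the list and assign the response
        noise = random.uniform(20, 50)          # assumed reading of the slide's Rand(20-50)
        base = 500 if answer == "yes" else 800  # steps 5-6: different base times for "yes" and "no"
        rt = base + K * n + noise
        print(f"{answer}, {round(rt)} msecs")   # step 7: print the response and the time
                                                # step 8: loop back for the next probe

sternberg_model(["C", "D", "H", "N"], ["C", "F"])
```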
What reasons do you have for
doubting that people do it this way?
 Because in this case time should not be one of the
computed outputs, but a measure of how many steps
it took.
 The same is true of intermediate states (e.g.,
evidence includes what subjects say, error rates, eye
tracking, judgments about the output, and so on.)
 Reaction time is one of the main sources of
evidence in cog sci.
 Question: Is time always a valid index of processing
complexity?
Results of the Sternberg memory search task
What do they tell us about how people do it? Is this Input-Output
equivalent or is it strongly equivalent to human performance?
[Graph: reaction time vs. memory-set size, with the predictions of an exhaustive search and a self-terminating search]
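One way to see what such data could decide is to write out the two candidate algorithms and compare the number of comparisons each predicts. The sketch below is an illustration added here (not Sternberg’s own analysis); it shows why a self-terminating search predicts a shallower slope for “yes” trials than for “no” trials, whereas an exhaustive search predicts the same slope for both:

```python
# Illustrative comparison of the comparisons performed by the two strategies.

def exhaustive_steps(memory_set, probe):
    steps, found = 0, False
    for item in memory_set:        # compare against every item, even after a match
        steps += 1
        if item == probe:
            found = True
    return found, steps            # always n comparisons

def self_terminating_steps(memory_set, probe):
    steps = 0
    for item in memory_set:        # stop as soon as a match is found
        steps += 1
        if item == probe:
            return True, steps     # on average about (n + 1) / 2 comparisons
    return False, steps            # misses still take n comparisons

memory_set = ["C", "D", "H", "N"]
print(exhaustive_steps(memory_set, "D"))        # (True, 4)
print(self_terminating_steps(memory_set, "D"))  # (True, 2)
print(self_terminating_steps(memory_set, "F"))  # (False, 4)
```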
More examples – arithmetic
 How can we tell what algorithm is being used
when children do arithmetic?
 Consider these examples of students doing
addition and subtraction. What can you tell
from these few examples?
32795 + 21826 = ??
Student answers: 54621   53511   10969   11179   11875
 How else could we try to find out what
method they were using?
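For instance, one candidate algorithm that reproduces the answer 53511 for 32795 + 21826 is column-by-column addition that simply drops the carry. The sketch below is an illustration added here (Buggy itself, discussed next, is a much larger rule-based diagnostic model, not this toy function):

```python
# Illustrative "bug": column-by-column addition that throws away the carry.
# Comparing its output with a child's answers is one way to diagnose which
# algorithm the child is actually using.

def add_without_carry(a, b):
    a_digits, b_digits = str(a)[::-1], str(b)[::-1]   # work right-to-left
    width = max(len(a_digits), len(b_digits))
    columns = []
    for i in range(width):
        da = int(a_digits[i]) if i < len(a_digits) else 0
        db = int(b_digits[i]) if i < len(b_digits) else 0
        columns.append(str((da + db) % 10))           # keep only the units digit
    return int("".join(reversed(columns)))

print(32795 + 21826)                    # 54621 (the correct algorithm)
print(add_without_carry(32795, 21826))  # 53511 (the "lost carry" bug)
```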
Studying human arithmetic
algorithms
• Arithmetic (VanLehn & Brown. “Buggy”)
Buggy – a model of children’s arithmetic – has about
350 “rules” which help uncover “deep bugs”
• Newell & Simon’s study of problem solving
Problem behavior graph and production systems
Use of protocols, eye tracking
• Information-Processing style of theory.
Computational but not always a computer model.
Part 2: Cognitive Architecture
• The slides from here to the end are replaced
by the presentation:
“CognitiveScience2_Architecture.ppt”
• The rest of this presentation is very similar
but there are a few differences.
Representation in perception
• What do we know about the FORM of
perceptual representations?
• What does vision “tell” cognition?
• Does vision depend on cognition, or is it
encapsulated so that it cannot use knowledge?
• What is the “output” of the visual system?
The Form and Structure of
perceptual representations
● Our subjective impressions (our intuitions) of
what our representations are like are seriously
unreliable and misleading. We do not experience
the form of a representation, only its content –
what it is about or what it represents.
● But the demands of scientific explanation are
quite different, and they almost always lead us to
unfamiliar and counterintuitive conclusions.
This is what our conscious experience
suggests goes on in vision…
This is what the demands of explanation
suggest must be going on in vision…
Consider visual completions
Where’s Waldo?
Standard view of saccadic integration
by superposition
Does intentionality (and the trilevel hypothesis) only
apply to high-level processes such as reasoning?
 Examples from vision of “seeing as”: it’s what you see the figure as that
determines behavior – not its physical properties.
 What you see one part as determines what you see another part as.
Can you think of other ways of presenting a
stimulus so it is perceived as e.g., a Necker Cube?
Errors in recall suggest how visual
information is encoded
Children have very good visual memory,
yet often make egregious errors of recall
• Errors in relative orientation often take a
canonical form
• Errors in reproducing a 3D image preserve
3D information
Errors in recall suggest how visual
information is encoded
• Children more often confuse left-right than
rotated forms
• Errors in imitating actions are another source of
evidence
Ability to manipulate and recall patterns depends
on their conceptual, not geometric, complexity
• Difficulty in superimposing shapes depends on
how they are conceptualized
Look at first two shapes and superimpose them in your mind;
then draw (or select one) that is their superposition
Many studies have shown that memory for
shapes is dependent on the conceptual
vocabulary available for encoding them
e.g., recall of chess positions by beginners and masters
Other examples showing that it is how you represent
something that is relevant to cognitive science
 Examples from color vision
“Red light and yellow light mix to produce orange light”
This remains true for any way of getting red light and
yellow light:
 e.g. yellow may be light of 580 nanometer wavelength,
or it may be a mixture of light of 530 nm and 650 nm
wavelengths…
So long as one light looks yellow and the other looks red
the “law” will hold.
Two other considerations that are special
to cognitively determined behavior
1. The Cognitive Penetrability of most cognitive
processes. A regularity that is based on representations
(knowledge) can be systematically altered by imparting
new information that changes beliefs.
2. The critical role of "Cognitive Capacity". Because of
an organism's ecological or social niche, only a small
fraction of its behavioral repertoire is ever actually
observed. Nonetheless an adequate cognitive theory
must account for the behavioral repertoire that is
compatible with the organism's structure, which we call
its cognitive capacity.
Strong Equivalence and the
role of cognitive architecture
The concept of cognitive architecture
 If differences among behaviors (including differences
among individuals) are to be attributed to different
beliefs or different algorithms, then there must be some
common set of basic operations and mechanisms. This
is called the Cognitive Architecture
• The concept of a particular algorithm, or of being “the same
algorithm” is only meaningful if two computers have the
same architecture. Algorithm is architecture-relative.
 The architecture is the part of the system that does not
change when beliefs change. So it defines the system’s
Cognitive Capacity.
Example of model of the Sternberg
task discussed earlier
1. Store the memory set as a list L. Call the list size n.
2. Read the target item. (If there is no target item, then quit.)
3. Check whether the target is one of the letters in the list L.
4. If it is found in the list, set the response to “yes”; otherwise set it to “no”.
   (That provides the answer, but what about the time?)
5. If the response is “yes”, set the time to 500 + K * n + Rand(20–50)
6. If the response is “no”, set the time to 800 + K * n + Rand(20–50)
7. Print the response and the time.
8. Go to 2.
Is this the way people do it? How do you know?
Example of a weakly equivalent
model of the Sternberg task
1. Store the memory set as a list L. Call the list size n.
2. Read the target item. (If there is no target item, then quit.)
3. Check whether the target is one of the letters in the list L.
4. If it is found in the list, then set the response to “yes”; else set it to “no”.
5. If the response is “yes”, then set the time to 500 + K * n + Rand(20–50)
6. If the response is “no”, then set the time to 800 + K * n + Rand(20–50)
7. Print the response and the time.
8. Go to 2.
Is this the way people do it? How do you know?
Tacit assumptions made in
constructing a computational model
But there are many other properties of algorithms that
constitute assumptions about the cognitive
architecture. One class of properties seems so natural
that it goes unquestioned – it’s the control structure
● Operations are carried out in sequence. No operation can
begin until the previous one is completed. This seems so
natural that it goes unnoticed as an assumption.
● Another fundamental property that is assumed is that control
is passed from one operation to another (e.g., “go to”), as
opposed to being grabbed in a “recognize-act” cycle (see the sketch below)
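The contrast can be made concrete. The sketch below is an illustration added here (not a model from the slides): the same trivial counting task is run first under explicit sequential control and then under a minimal recognize-act cycle, in which whichever rule matches the current state grabs control on each cycle.

```python
# Illustrative contrast between two control structures.

# (a) Sequential control: each operation runs when the previous one finishes,
#     and control is passed explicitly from step to step ("go to" style).
def sequential_count(target):
    n = 0
    while n < target:          # test, increment, go back to the test
        n += 1
    return n

# (b) Recognize-act cycle: a set of condition-action rules; on every cycle
#     whichever rule matches the current state fires. No operation hands
#     control to another; control is "grabbed" by whatever rule applies.
def production_system_count(target):
    state = {"n": 0, "done": False}
    rules = [
        (lambda s: s["n"] >= target, lambda s: {**s, "done": True}),
        (lambda s: s["n"] < target,  lambda s: {**s, "n": s["n"] + 1}),
    ]
    while not state["done"]:
        for matches, act in rules:
            if matches(state):
                state = act(state)
                break
    return state["n"]

print(sequential_count(3), production_system_count(3))  # 3 3
```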
More about the computational model
and the tacit assumptions it makes
On the difference between explanations
that appeal to mental architecture and
those that appeal to tacit knowledge
Suppose we observe some robust
behavioral regularity. What does it tell
us about the nature of the mind or
about its intrinsic properties?
An illustrative example: Mystery Code Box
What does this behavior pattern tell us about the nature of the box?
An illustrative example: Mystery Code Box
Careful study reveals that pattern #2 only occurs in
this special context when it is preceded by pattern A
What does this behavior pattern tell us about the nature of the box?
The Moral:
Regularities in behavior
may be due to either:
1. The inherent nature of the system or
its structure or architecture.
2. The content of what the system
represents (what it “knows”).
Why it matters:
A great many regular patterns of behavior
reveal nothing more about human nature
than that people do what follows rationally
from what they believe.
An example from language understanding
The example of human conditioning
An example from language understanding
Examples from language.
John gave the book to Fred because he finished it
John gave the book to Fred because he wanted it
● The city council refused to give the workers a permit for
a demonstration because they feared violence
● The city council refused to give the workers a permit for
a demonstration because they were communists
Another example where it matters:
The study of mental imagery
Application of the architecture vs knowledge
distinction to understanding what goes on when
we reason using mental images
Examples of behavior regularities
attributable to tacit knowledge
• Color mixing, conservation of volume
• The effect of image size ?
• Scanning mental images ?
Color mixing example
Conservation of volume example
Our studies of mental scanning
[Graph: Latency (secs) vs. relative distance on the image (1–4), for three conditions: scan image, imagine lights, show direction]
(Pylyshyn & Bannon. See Pylyshyn, 1981)
There is even reason to doubt that one can imagine scanning continuously (Pylyshyn & Cohen, 1998)
Can you rotate a mental image?
Which pair of 3D objects is the same except for orientation?
Do mental images have size?
Imagine a very small mouse. Can you see its whiskers?
Now imagine a huge mouse. Can you see its whiskers?
Which is faster?
Why do so many people deny these
obvious facts about mental imagery?
 The power of subjective experience (phenomenology).
The mind-body problem is everywhere: but subjective
experience does not cause behavior! (e.g., conscious will)
 The failure to make some essential distinctions
 Content vs form (the property of images vs the property of
what images are about) {compare the code box example}
 An image of X with property P can mean
1) (An image of X) with property P or
2) An image of (X with property P)
 Capacity vs typical behavior: Architecture vs knowledge
Are all the things we thought
were due to internal pictures
actually due to tacit knowledge?
Other reasons for imagery phenomena:
• Task demands: Imagine that X = What
would it be like if you saw X?
Are there pictures in the brain?
• There is no evidence for cortical
displays of the right kind to explain
visual or imaginal phenomena
So what is in the brain?
• The best hypothesis so far (i.e., the only one
that has not been shown to be clearly on the
wrong track) is that the brain is a species of
computer in which representations of the
world are encoded in the form of symbol
structures, and actions are determined by
calculations (i.e., inferences) based on these
symbolic encodings.
So why does it not feel like we are
doing computations?
 Because the content of our conscious experience is a
very poor guide to what is actually going on that causes
our experiences and our behavior. Science is
concerned with causes, not just correlations.
 Because we can’t assume that the way things seem has
much to do with how things actually work (e.g., language
understanding)
 As in most sciences, the essential causes are far from obvious
(e.g., why does the earth go around the sun? What is this
table made of? etc.).
 In the case of cognition, what is going on is a delicate
mixture of the obvious (what Granny or Shakespeare knew
about why people do what they do) and the incredible
We can’t even be sure that we have
the right methods or instruments