LANGUAGE AND LATERALIZATION IN THE BRAIN

Andreas Højlund Nielsen
Folkeuniversitetet, Odense, 12 May 2015

Center of Functionally Integrative Neuroscience, Aarhus University / Aarhus University Hospital
Interacting Minds Centre, Aarhus University
Dept. of Linguistics, Aarhus University
[email protected]
BRAIN RESEARCHER: SO ARE YOU A DOCTOR, THEN?
Born 1983
BA in linguistics, 2008
Cand.mag. (MA) in cognitive semiotics, 2011
(Almost) PhD in cognitive neuroscience (27 May), 2015
Postdoc at Aarhus University (Parkinson's, DBS, language), 2015
TONIGHT'S PROGRAMME
Andreas talks (approx. 45 min)
Break (approx. 15 min)
Andreas talks (approx. 35 min)
Questions (5-10 min)
RIDDLE OF THE BRAIN #1
Where does language sit in the brain?
APHASIA
WHEN LANGUAGE IN THE BRAIN BREAKS DOWN
From Greek: 'without speech' (a- = 'without', phasis = 'speech', aphatos = 'speechless')
i.e. 'a reduced ability to use (or understand) language'
Most often caused by:
stroke (blood clot or bleeding in the brain)
brain tumour
trauma (a blow to the head)
certain types of dementia
PAUL BROCA (1824-1880)
Patient Leborgne, also known as "Tan"
Broca's post-mortem examinations of "Tan" led to the description of 'Broca's aphasia'
EXAMPLE: BROCA'S APHASIA
APHASIA AFTER STROKE
(BLOOD CLOT OR BLEEDING IN THE BRAIN)
Shown on the slide: the first page (abstract and Table 1) of Pedersen PM, Jørgensen HS, Nakayama H, Raaschou HO, Olsen TS. Aphasia in acute stroke: incidence, determinants, and recovery. Ann Neurol 1995;38:659-666.

Abstract: Knowledge of the frequency and remission of aphasia is essential for the rehabilitation of stroke patients and provides insight into the brain organization of language. We studied prospectively and consecutively an unselected and community-based sample of 881 patients with acute stroke. Assessment of aphasia was done at admission, weekly during the hospital stay, and at a 6-month follow-up using the aphasia score of the Scandinavian Stroke Scale. Thirty-eight percent had aphasia at the time of admission; at discharge 18% had aphasia. Sex was not a determinant of aphasia in stroke, and no sex difference in the anterior-posterior distribution of lesions was found. The remission curve was steep: stationary language function was reached within 2 weeks in 95% of those with initial mild aphasia, within 6 weeks in those with moderate, and within 10 weeks in those with severe aphasia. A valid prognosis of aphasia could be made within 1 to 4 weeks after the stroke depending on the initial severity of aphasia. Initial severity of aphasia was the only clinically relevant predictor of aphasia outcome. Sex, handedness, and side of stroke lesion were not independent outcome predictors, and the influence of age was minimal. (SSS = Scandinavian Stroke Scale; BI = Barthel Index; SD = standard deviation; NS = not significant.)

From Table 1 (Basic Patient Characteristics in Relation to Aphasia Severity): 92-98% of patients in all severity groups were right-handed (NS), whereas the stroke lesion was left-sided in 87-93% of the patients with aphasia but in only 37% of those without aphasia (p < 0.00001).
ANOTHER EXAMPLE: BROCA'S APHASIA
SPLIT-BRAIN PATIENTS

"A TALE OF ..." by David Wolman (Nature, vol. 483, 15 March 2012)

Since the 1960s, researchers have been scrutinizing a handful of patients who underwent a radical kind of brain surgery. The cohort has been a boon to neuroscience — but soon it will be gone.
n the first months after her surgery, shopping for groceries was
infuriating. Standing in the supermarket aisle, Vicki would look
at an item on the shelf and know that she wanted to place it in her
trolley — but she couldn’t. “I’d reach with my right for the thing
I wanted, but the left would come in and they’d kind of fight,” she says.
“Almost like repelling magnets.” Picking out food for the week was a
two-, sometimes three-hour ordeal. Getting dressed posed a similar
challenge: Vicki couldn’t reconcile what she wanted to put on with
what her hands were doing. Sometimes she ended up wearing three
outfits at once. “I’d have to dump all the clothes on the bed, catch my
breath and start again.”
In one crucial way, however, Vicki was better than her pre-surgery
self. She was no longer racked by epileptic seizures that were so severe
they had made her life close to unbearable. She once collapsed onto the
bar of an old-fashioned oven, burning and scarring her back. “I really
just couldn’t function,” she says. When, in 1978, her neurologist told
her about a radical but dangerous surgery that might help, she barely hesitated. If the worst were to happen, she knew that her parents would take care of her young daughter. "But of course I worried," she says. "When you get your brain split, it doesn't grow back together."
In June 1979, in a procedure that lasted nearly 10 hours, doctors created a firebreak to contain Vicki's seizures by slicing through her corpus callosum, the bundle of neuronal fibres connecting the two sides of her brain. This drastic procedure, called a corpus callosotomy, disconnects the two sides of the neocortex, the home of language, conscious thought and movement control. Vicki's supermarket predicament was the consequence of a brain that behaved in some ways as if it were two separate minds.
After about a year, Vicki’s difficulties abated. “I could get things
together,” she says. For the most part she was herself: slicing vegetables,
tying her shoe laces, playing cards, even waterskiing.
But what Vicki could never have known was that her surgery would
turn her into an accidental superstar of neuroscience. She is one of
fewer than a dozen ‘split-brain’ patients, whose brains and behaviours
have been subject to countless hours of experiments, hundreds of scientific papers, and references in just about every psychology textbook
of the past generation. And now their numbers are dwindling.
Through studies of this group, neuroscientists now know that the
healthy brain can look like two markedly different machines, cabled
together and exchanging a torrent of data. But when the primary cable
is severed, information — a word, an object, a picture — presented to
one hemisphere goes unnoticed in the other. Michael Gazzaniga, a
cognitive neuroscientist at the University of California, Santa Barbara,
and the godfather of modern split-brain science, says that even after
working with these patients for five decades, he still finds it thrilling
to observe the disconnection effects first-hand. “You see a split-brain
patient just doing a standard thing — you show him an image and he can't say what it is. But he can pull that same object out of a grab-bag," Gazzaniga says. "Your heart just races!"
Work with the patients has teased out differences between the two hemispheres [...]
(NATURE.COM: To hear more about split-brain patients, visit go.nature.com/knhmxk)
SPLIT-BRAIN PATIENTS
Split-brain patients have undergone surgery to cut
the corpus callosum, the main bundle of neuronal
fibres connecting the two sides of the brain.
Input from the left field of view is processed by the right hemisphere, and vice versa (visual fields, corpus callosum, left and right hemispheres).
A word is flashed briefly to the right field of view, and the patient is asked what he saw. Because the left hemisphere is dominant for verbal processing, the patient's answer matches the word ("Face").
Now a word is flashed to the left field of view, and the patient is asked what he saw. The right hemisphere cannot share information with the left, so the patient is unable to say what he saw ("Nothing"), but he can draw it.
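As a rough illustration (my addition, not part of the slide), the logic of this figure can be written out in a few lines of Python: the visual half-field determines which hemisphere receives the stimulus, and only the left, language-dominant hemisphere can name it. This assumes a complete callosotomy and typical left-hemisphere language dominance.

```python
# Minimal sketch of the split-brain logic in the figure above: each visual
# half-field projects to the opposite hemisphere, and with the corpus callosum
# cut, only the left (language-dominant) hemisphere can name what it sees;
# the right hemisphere can still guide drawing with the left hand.

def split_brain_response(visual_field: str, stimulus: str) -> dict:
    """What a split-brain patient can report for a briefly flashed stimulus."""
    receiving_hemisphere = "left" if visual_field == "right" else "right"
    can_name = receiving_hemisphere == "left"  # speech is left-lateralized
    return {
        "hemisphere": receiving_hemisphere,
        "verbal_report": stimulus if can_name else "nothing",
        "can_draw_with_left_hand": receiving_hemisphere == "right",
    }

if __name__ == "__main__":
    print(split_brain_response("right", "face"))  # names the word: 'face'
    print(split_brain_response("left", "face"))   # says 'nothing', but can draw it
```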
SPLIT-BRAIN PATIENTS
RIDDLE OF THE BRAIN #1
Where does language sit in the brain?
ANSWER: Language sits primarily in the left side of the brain
RIDDLE OF THE BRAIN #2
Why does language sit on the left?
HANDEDNESS AND LANGUAGE

Shown on the slide: Somers M, Aukes MF, Ophoff RA, Boks MP, Fleer W, de Visser KL, Kahn RS, Sommer IE. On the relationship between degree of hand-preference and degree of language lateralization. Brain & Language 2015;144:10-15.

Abstract: Language lateralization and hand-preference show inter-individual variation in the degree of lateralization to the left or right, but their relation is not fully understood. Disentangling this relation could aid elucidating the mechanisms underlying these traits. The relation between degree of language lateralization and degree of hand-preference was investigated in extended pedigrees with multi-generational left-handedness (n = 310). Language lateralization was measured with functional Transcranial Doppler, hand-preference with the Edinburgh Handedness Inventory. Degree of hand-preference did not mirror degree of language lateralization. Instead, the prevalence of right-hemispheric and bilateral language lateralization rises with increasing strength of left-handedness. Degree of hand-preference does not predict degree of language lateralization, thus refuting genetic models in which one mechanism defines both hand-preference and language lateralization. Instead, our findings suggest a model in which increasing strength of left-handedness is associated with increased variation in directionality of cerebral dominance.

From the Results: mixed-handers had a lower prevalence of moderate right-lateralization (4.3%) than moderate right- and left-handers (6.0% and 6.9%, respectively), and moderate left-handers had a lower prevalence of bilateral lateralization (17.2%) than mixed- and strong left-handers (26.1% and 29.8%, respectively). Frequencies, including the overall frequency of atypical lateralization (bilateral, moderate right- and strong right-hemispheric collapsed), peaked in the strong left-handedness subgroup.

Fig. 1 (annotated on the slide with "Venstre hånd" = left hand, "Højre hånd" = right hand): the proportion of each of five language-lateralization categories (strong left-hemispheric, moderate left-hemispheric, bilateral, moderate right-hemispheric, strong right-hemispheric) plotted against five categories of hand-preference (strong right-handedness, moderate right-handedness, mixed-handedness, moderate left-handedness, strong left-handedness).
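For readers unfamiliar with "degree of lateralization", a common way to quantify it is a laterality index of the form LI = (L - R) / (L + R), computed from left- and right-hemisphere activation measures. The Python sketch below only illustrates this general idea; the category cut-offs are illustrative assumptions, not the thresholds used by Somers et al.

```python
# Minimal sketch (not from the paper): a laterality index LI = (L - R) / (L + R)
# expresses degree of lateralization from left/right activation measures
# (here: hypothetical fTCD-style values). The cut-offs below are assumptions
# chosen only to show how an LI could be binned into the five categories
# named in the figure caption above.

def laterality_index(left: float, right: float) -> float:
    return (left - right) / (left + right)

def category(li: float, strong: float = 0.6, moderate: float = 0.2) -> str:
    """Bin an LI value into the five lateralization categories."""
    if li >= strong:
        return "strong left-hemispheric"
    if li >= moderate:
        return "moderate left-hemispheric"
    if li > -moderate:
        return "bilateral"
    if li > -strong:
        return "moderate right-hemispheric"
    return "strong right-hemispheric"

if __name__ == "__main__":
    for l, r in [(8.0, 2.0), (5.5, 4.5), (3.0, 7.0)]:
        li = laterality_index(l, r)
        print(f"L={l}, R={r} -> LI={li:+.2f} ({category(li)})")
```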
HANDEDNESS AND LANGUAGE

Shown on the slide: Llaurens V, Raymond M, Faurie C. Why are some people left-handed? An evolutionary perspective (Review). Phil. Trans. R. Soc. B 2009;364:881-894. Published online 5 December 2008.

Abstract: Since prehistoric times, left-handed individuals have been ubiquitous in human populations, exhibiting geographical frequency variations. Evolutionary explanations have been proposed for the persistence of the handedness polymorphism. Left-handedness could be favoured by negative frequency-dependent selection. Data have suggested that left-handedness, as the rare hand preference, could represent an important strategic advantage in fighting interactions. However, the fact that left-handedness occurs at a low frequency indicates that some evolutionary costs could be associated with left-handedness. Overall, the evolutionary dynamics of this polymorphism are not fully understood. Here, we review the abundant literature available regarding the possible mechanisms and consequences of left-handedness. We point out that hand preference is heritable, and report how hand preference is influenced by genetic, hormonal, developmental and cultural factors. We review the available information on potential fitness costs and benefits acting as selective forces on the proportion of left-handers. Thus, evolutionary perspectives on the persistence of this polymorphism in humans are gathered for the first time, highlighting the necessity for an assessment of fitness differences between right- and left-handers.

Keywords: handedness; polymorphism; human
RIDDLE OF THE BRAIN #2
Why does language sit on the left?
HANDEDNESS?
Lateralization is diverse and complicated
No unambiguous or unified genetic explanation
BREAK (15 MIN)
EEG (ELECTROENCEPHALOGRAPHY)
Raw EEG signals
ERP (EVENT-RELATED POTENTIALS)
Raw EEG signal
beep   beep   beep   beep
Averaged ERP signal
Time (ms)
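As a rough illustration of the averaging step sketched on this slide, the Python snippet below (my addition; all numbers are made-up assumptions, not lecture data) cuts the raw EEG into stimulus-locked epochs and averages them, so activity unrelated to the beeps tends to cancel out and the event-related potential emerges.

```python
# Minimal sketch, assuming a single-channel EEG recording and known stimulus
# onsets: the ERP is obtained by cutting identical epochs around each beep
# and averaging them across repetitions.
import numpy as np

fs = 250                                  # sampling rate (Hz), assumed
eeg = np.random.randn(60 * fs)            # 60 s of simulated raw EEG (noise)
onsets = np.arange(2 * fs, 58 * fs, fs)   # one beep per second, assumed

# add a small stimulus-evoked response after every onset so the example works
evoked = np.hanning(int(0.3 * fs)) * 2.0
for t in onsets:
    eeg[t:t + evoked.size] += evoked

pre, post = int(0.1 * fs), int(0.5 * fs)  # epoch window: -100 ms .. +500 ms
epochs = np.stack([eeg[t - pre:t + post] for t in onsets])
erp = epochs.mean(axis=0)                 # averaged ERP
times_ms = (np.arange(-pre, post) / fs) * 1000

print(f"{len(onsets)} epochs averaged; peak ERP amplitude "
      f"{erp.max():.2f} at {times_ms[erp.argmax()]:.0f} ms")
```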
MISMATCH NEGATIVITY (MMN)
[...sssssssssssdssssdsssssdssssdssssssdsss...]
Oddball paradigm
s = standard tone
d = deviant tone
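The oddball logic can be illustrated in the same way as the ERP averaging above: responses to standards (s) and deviants (d) are averaged separately, and the mismatch negativity is read off the deviant-minus-standard difference wave. The sketch below is a made-up simulation (my addition), not data from the studies cited on the following slides.

```python
# Minimal sketch of the oddball paradigm, assuming epochs have already been cut
# as in the ERP example above: average standards and deviants separately; the
# MMN is the deviant-minus-standard difference wave, typically peaking roughly
# 100-250 ms after the deviation.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 600, 150                                     # assumed
sequence = rng.choice(["s", "d"], size=n_trials, p=[0.85, 0.15])   # ~15% deviants

epochs = rng.standard_normal((n_trials, n_samples))
# simulate an extra negativity after the change for deviant trials only
mmn_window = slice(50, 90)
epochs[sequence == "d", mmn_window] -= 1.0

standard_erp = epochs[sequence == "s"].mean(axis=0)
deviant_erp = epochs[sequence == "d"].mean(axis=0)
mmn = deviant_erp - standard_erp                                   # difference wave

print(f"{(sequence == 'd').sum()} deviants; "
      f"mean MMN amplitude in window: {mmn[mmn_window].mean():.2f}")
```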
MISMATCH NEGATIVITY (MMN)
[...sssssssssssdssssdsssssdssssdssssssdsss...]
Näätänen et al., Nature, 1997
“LOCAL” DATA
MMN & LEARNING
Difference between Finns and Estonians
Known phoneme: more left-lateralized
Näätänen et al., Nature, 1997
MMN & LEARNING
Finnish and Estonian children
Cheour et al., Nature Neurosci., 1998
MMN & LEARNING: LANGUAGE AND MORSE CODE

Shown on the slide: Kujala A, Huotilainen M, Uther M, Shtyrov Y, Monto S, Ilmoniemi RJ, Näätänen R. Plastic cortical changes induced by learning to communicate with non-speech sounds. NeuroReport 2003;14(13):1683-1687.

Abstract: With Morse code, an acoustic message is transmitted using combinations of tone patterns rather than the spectrally and temporally complex speech sounds that constitute the spoken language. Using MEG recordings of the mismatch negativity (MMN, an index of permanent auditory cortical representations of native language speech sounds), we probed the dominant hemisphere for the developing Morse code representations in adult Morse code learners. Initially, the MMN to the Morse coded syllables was, on average, stronger in the hemisphere opposite to the one dominant for the MMN to native language speech sounds. After a training period of 3 months, the pattern reversed, however: the mean Morse code MMN became lateralized to the hemisphere that was predominant for the speech-sound MMN. This suggests that memory traces for the Morse coded acoustic language units develop within the hemisphere that already accommodates the permanent traces for natural speech sounds. These plastic changes manifest, presumably, the close associations formed between the neural representations of the tone patterns and phonemes.

Key result (Figs. 2 and 3, annotated on the slide with "Sprog" = language, "før" = before, and "efter" = after training): hemispheric dominance for Morse code processing was calculated as the difference between the MMNm strengths in the speech-MMNm dominant and non-dominant hemispheres. In four of the seven subjects the left hemisphere dominated native-language speech-sound processing, whereas in three subjects the right hemisphere showed dominance. Before the training, the mean Morse-MMNm was lateralized to the hemisphere non-dominant for the speech-MMNm (dominance difference of -11 nAm; group means of roughly 26 nAm in the speech-dominant vs 37 nAm in the non-dominant hemisphere). After the training, the Morse-MMNm became stronger in the speech-MMNm dominant hemisphere (+11 nAm; roughly 28 vs 17 nAm), indicating a complete reversal of hemispheric lateralization.
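The dominance measure just described is simple arithmetic; the Python sketch below (my addition, not from the paper) only makes it explicit, using the approximate group-mean values quoted in the excerpt above rather than the individual data.

```python
# Minimal sketch (not from Kujala et al.) of the dominance measure described
# above: hemispheric dominance for the Morse-code MMNm expressed as the
# difference (in nAm) between the speech-MMNm dominant and the non-dominant
# hemisphere; positive values mean lateralization to the speech hemisphere.
# The numbers are the approximate group means quoted in the excerpt above.

def mmnm_dominance(speech_dominant_nAm: float, non_dominant_nAm: float) -> float:
    """Positive = Morse MMNm lateralized to the speech-dominant hemisphere."""
    return speech_dominant_nAm - non_dominant_nAm

before = mmnm_dominance(speech_dominant_nAm=26.0, non_dominant_nAm=37.0)
after = mmnm_dominance(speech_dominant_nAm=28.0, non_dominant_nAm=17.0)

print(f"Before training: {before:+.0f} nAm (Morse MMNm in the non-speech hemisphere)")
print(f"After training:  {after:+.0f} nAm (reversed: now in the speech hemisphere)")
```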
MMN & LEARNING: MUSICIANS

Shown on the slide: Vuust P, Pallesen KJ, Bailey C, van Zuijen TL, Gjedde A, Roepstorff A, Østergaard L. To musicians, the message is in the meter: pre-attentive neuronal responses to incongruent rhythm are left-lateralized in musicians. NeuroImage 2005;24:560-564.

Abstract: Musicians exchange non-verbal cues as messages when they play together. This is particularly true in music with a sketchy outline. Jazz musicians receive and interpret the cues when performance parts from a regular pattern of rhythm, suggesting that they enjoy a highly developed sensitivity to subtle deviations of rhythm. We demonstrate that pre-attentive brain responses recorded with magnetoencephalography to rhythmic incongruence are left-lateralized in expert jazz musicians and right-lateralized in musically inept non-musicians. The left-lateralization of the pre-attentive responses suggests functional adaptation of the brain to a task of communication, which is much like that of language.

From the study: eight musically inept non-musicians and nine expert jazz musicians (educated at the Sibelius Academy of Music, Helsinki) listened to sequences of increasingly incongruent rhythm (sI, sII, sIII), mimicking communicative cues in improvised music, while the strength and lateralization of the pre-attentive MMNm responses were recorded with MEG. Fig. 2 shows the location and relative amplitude of the MMN dipoles and an asymmetry index calculated from the dipole amplitudes, plotted separately for sI, sII and sIII. The authors relate the result to earlier findings that the MMN is left-lateralized to deviating phonemes of the listener's mother tongue (Näätänen et al., 1997) and to deviating Morse code syllables in Morse-code-trained subjects (Kujala et al., 2003), suggesting that left lateralization occurs when sounds are perceived as meaningful.
RIDDLE OF THE BRAIN #2
Why does language sit on the left?
HANDEDNESS?
Lateralization is diverse and complicated
No unambiguous or unified genetic explanation
Learning and expertise probably play a role

Shown on the slide: McGettigan C, Scott SK. Cortical asymmetries in speech perception: what's wrong, what's right and what's left? Trends in Cognitive Sciences, May 2012, Vol. 16, No. 5. The review asks whether left-hemisphere dominance for speech perception can be explained in a domain-general way, for example by a left-hemisphere preference for rapid temporal information or by asymmetric sampling in time, and argues that such acoustic accounts need to be informed by the nature of the information actually carried in speech.
THANK YOU FOR YOUR ATTENTION
QUESTIONS?