Fabricating Reality Through Language
Ken W. Grant – 17 April 2015
USUHS

Disclaimer
The views expressed in this presentation are those of the presenter and do not reflect the official policy of the Department of the Navy, the Department of Defense, or the U.S. Government.

Fabricating Reality Through Language
"…Trying to make sense out of incomplete messages."
• Zebras have black and white ______.
• Did you eat yet?

Making Sense From Incomplete Messages
DRESS  STRESS  STRESS  DRESS  DRESS  STRESS

Thought and Language
From L. S. Vygotsky, Thinking and Speech (1934). Edited and translated in part by Eugenia Hanfmann and Gertrude Vakar (M.I.T. Press, 1962) and in part by Norris Minick. Revised by Alex Kozulin (M.I.T. Press, 1986).

Speech – Just as Variable

Babies Can Do It

Washoe (1965–2007) – Chimps Can Do It

Stages of Speech Processing

Auditory Speech Recognition in Noise
[Figure: percent correct recognition as a function of speech-to-noise ratio (dB) for normal-hearing (NH) listeners, hearing-impaired (HI) listeners, and automatic speech recognition (ASR), all tested with auditory-only sentences.]
• Roughly 13 dB SNR loss with low-context sentences when comparing HI subjects to NH subjects (i.e., HI listeners need about a 13 dB more favorable speech-to-noise ratio to reach the same performance level).
• Automatic speech recognition more closely resembles HI performance: it requires a very favorable SNR to reach 100% correct and falls off quickly in noise.

You Don't Have To Have A Hearing Loss To Have Trouble Understanding Speech
• Not all noise exposures lead to hearing loss as defined by the audiogram.
• Common noise sources: concerts, firing range, leaf blowers.
• Usually after 24–48 hours, hearing thresholds return to normal.

Listening Experience Can Modify the Way We Hear
• Musicians versus non-musicians encode acoustic features (pitch, timing, and timbre) differently – better speech recognition performance in noise.
• The cABR is a brainstem response to a complex waveform (/da/).
• The cABR waveform, possibly generated in the inferior colliculus, is modified by past experience.
N. Kraus & S. Anderson (2014). Hearing Review, August, 18–21.

Everyday Environments Require Segregation of Sound Sources and Attention
• Multiple speakers.
• Auditory cues to separate sound sources.
• Focus attention on the target speaker.
• What happens in the brain that allows us to do this?
Our wonderful colleagues at BU

Processing Speed, Working Memory, Attention, and Continuous Speech
[Figure: timeline of a connected-speech sample (0–4.35 sec) marking the time points of word recognition and the intervals available for memory encoding and rehearsal.]
– Processing words in connected speech requires a minimum processing speed (the listener must handle roughly 220 words/min, i.e., about 270 ms per word).
– Words are usually recognized before the end of the word occurs (lexical context).
– The time after the recognition point and before the next word starts can be used to store words in memory, rehearse previously stored words, and activate the next most likely lexical items.

Processing Speech in Real Time: An Example of Sequence Buffering
• What happens when the next word arrives before the previous word was fully processed (noise, hearing loss)?
– Abort processing of the current word, or delay processing of the incoming word (see the sketch below).
– Error rates increase (little is known about the kinds of errors made in this situation).
• Nonsense syllable tests typically do not have this dependence on processing time.
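The buffering trade-off above can be made concrete with a small simulation. The sketch below is purely illustrative and not part of the original presentation: the 270 ms per-word budget (chosen to match the ~220 words/min figure cited earlier), the word stream, and the simulate helper are all assumptions.

```python
# Illustrative sketch only (not from the presentation): a toy model of the
# sequence-buffering problem. Each word is assumed to need PROCESS_TIME
# seconds of processing; a word that arrives while the previous word is
# still being processed must either be delayed or the word in progress
# must be abandoned ("abort").

PROCESS_TIME = 0.27  # assumed per-word budget, roughly 60 s / 220 words


def simulate(word_onsets, strategy="delay"):
    """Process (word, onset_time) pairs in arrival order.

    strategy="delay": queue the early word and start it when the
        processor frees up (recognition is late, leaving less slack
        for memory encoding and rehearsal).
    strategy="abort": abandon the word currently in progress and
        start the new one immediately (the abandoned word is an error).
    Returns (recognized_words, aborted_words).
    """
    recognized, aborted = [], []
    busy_until = 0.0  # time at which the word in progress finishes

    for word, onset in word_onsets:
        if onset >= busy_until:
            # Processor is free: start on the new word right away.
            busy_until = onset + PROCESS_TIME
            recognized.append(word)
        elif strategy == "delay":
            # Word arrived early: wait for the processor, then start.
            busy_until = busy_until + PROCESS_TIME
            recognized.append(word)
        else:  # strategy == "abort"
            # Drop the word still in progress, restart on the new one.
            aborted.append(recognized.pop())
            busy_until = onset + PROCESS_TIME
            recognized.append(word)

    return recognized, aborted


if __name__ == "__main__":
    # Hypothetical stream: one word every 200 ms -- faster than the
    # assumed 270 ms per-word budget, so buffering pressure builds up.
    stream = [(w, i * 0.20) for i, w in enumerate("did you eat yet".split())]
    print("delay:", simulate(stream, strategy="delay"))
    print("abort:", simulate(stream, strategy="abort"))
```

With words spaced 200 ms apart (faster than the assumed budget), the delay strategy recognizes every word but progressively later, while the abort strategy loses three of the four words, mirroring the increased error rates noted above.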
Communication Breakdown
• Hearing loss
• Signal distortion (with or without hearing loss)
• Listening history
• Attention
• Memory
• Processing speed
• Source separation
  – Pitch
  – Timbre
  – Spatial separation
  – Timing

A New Challenge: Blast-Exposed, Normal-Hearing Service Members
• Clinically normal hearing thresholds
• Trouble understanding speech in complex environments
• Effortful listening
• Fatigue
• Depression
• Isolation
• Distortion?
• Auditory processing?
• Cognitive processing?
• Assessments
• Low-gain hearing aids
• Brain exercises

• Prevalence – just how big is this problem?
  – Need to know how many resources to devote to the problem
• Assess the communication breakdown from several simultaneous angles
  – Early processing stages
  – Central processing; binaural integration
  – Cognitive processes
• Recommend a course of action – brain fitness

• Things we know
  – The audiogram doesn't explain difficulties in speech understanding
  – The problem requires a multi-pronged attack
    • Hidden hearing loss – distortion
    • Central processing
    • Cognitive processing
• Things we're not sure of
  – How big a problem is this really? (Initial estimates suggest up to 20% of all those deployed to Iraq and Afghanistan.)
  – Can training regimens be optimized if we target the stages of processing where the breakdown occurs?