BRAINHEARING TECHNOLOGY RESEARCH

DON'T FORGET THE BRAIN

You may have had a teacher tell you at some point, "garbage in, garbage out", but this phrase is only partially correct when it comes to the complex computer called the brain. When both our auditory system and cognitive function are intact, speech can even be distorted in multiple ways and still be understood (Davis et al., 2005). All modern hearing technology changes the signal in some fashion to improve audibility. However, when the speech signal is manipulated too much, it can become distorted and actually interfere with the brain's ability to comprehend. Therefore, we believe it is critical to provide signal processing techniques that support the brain's natural cognitive processes.

Research into the relationship between cognition and audition began more than 30 years ago. Since then, landmark studies in Cognitive Hearing Science have shown how cognitive factors can be incorporated into the design of hearing technology (Rönnberg et al., 2011). We call this BrainHearing™.

DETAILS MATTER WHEN CONDITIONS ARE SUB-OPTIMAL

Consider a typical clinic situation: a new patient says, "I can't understand my favorite television show unless I turn it up." With a little questioning, you find out that her favorite show is a British comedy and that she speaks with an American southern accent. Her difficulty in understanding the British accent is an example of a sub-optimal listening condition: what she hears doesn't match the patterns of speech stored in her long-term memory, and any mismatch requires extra work from the brain.

The Ease of Language Understanding (ELU) model explains how speech is processed by the brain in both easy and challenging listening conditions (Rönnberg et al., 2008; Rönnberg, 2003). Implicit processing is largely automatic and effortless when nothing interferes with the speech signal (optimal conditions).