Activity-Aware Hearing Aids

The chief complaint of people with hearing loss is difficulty understanding speech in noisy environments, and the primary complaint about hearing aid technology is its limited ability to help in those environments. This reality is counterintuitive when one considers how good modern hearing aids are at identifying the spatial location of target speech, even when that speech is presented against a background of many talkers and other environmental sounds. This precision is afforded in part by the environmental sound classification system built into the hearing aid itself [See Environmental Sound Classifier project]. We believe that one reason listeners cannot take full advantage of the spatial hearing systems in the best available hearing aids is that the hearing aids do not know, from moment to moment, which talker the listener wants to hear! The hearing aid assumes that the target speech comes from the loudest talker as measured at the hearing aid microphones, and one can easily imagine listening scenarios where that assumption fails.
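To make the loudest-talker assumption concrete, here is a minimal sketch in Python with NumPy of how such a heuristic might behave. The function name, the idea of pre-computed outputs for each candidate look direction, and all of the numbers are hypothetical illustrations for this page, not the signal processing of any actual hearing aid.

import numpy as np

def loudest_direction(beam_signals, directions):
    """Pick the steering direction whose (hypothetical) beamformer
    output has the highest short-term RMS level."""
    rms = np.sqrt(np.mean(np.square(beam_signals), axis=1))
    return directions[int(np.argmax(rms))]

# Three simulated talkers at -60, 0, and +60 degrees; the center
# talker is loudest, so the heuristic selects 0 degrees even if the
# listener is actually attending to the quieter talker at +60.
rng = np.random.default_rng(0)
levels = np.array([[0.2], [0.8], [0.5]])  # per-talker signal scales (invented)
signals = levels * rng.standard_normal((3, 16000))
print(loudest_direction(signals, np.array([-60, 0, 60])))  # prints 0

The point of the sketch is the failure mode in the comment: level alone cannot reveal which talker the listener intends to follow.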

Left: 25-element loudspeaker array for simulating static and dynamic audio scenes.

Right: KEMAR manikin on a turntable with head tracker and hearing aids.

The environmental sound classifier already has access to a host of simple and advanced algorithms for accurately classifying sounds in real environments and for focusing amplification on the location of the strongest speech or musical sound. But how can the hearing aid know which sound in the environment the listener is focusing on? One possibility is to supplement the classifier's existing acoustic analyses with information about the listener's intention. That information might be derived from characteristic head movements, from eye-gaze patterns, and even from the listener's brain activity. Any or all of these sources can be combined into a decision matrix that lets the hearing aid system process sound in accordance with the listener's intention, as sketched below.
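As a thought experiment, the following Python/NumPy sketch (continuing the hypothetical example above) fuses soft evidence from the acoustic classifier, head yaw, eye gaze, and EEG-based attention decoding into one score per candidate direction. All weights, sigmas, and evidence values are invented for illustration; in a real system they would stand in for learned reliability estimates for each cue.

import numpy as np

# Candidate look directions (azimuth in degrees). Everything below is
# a hypothetical illustration, not a real hearing aid algorithm.
directions = np.array([-60.0, 0.0, 60.0])

def soft_evidence(estimate_deg, sigma_deg):
    """Convert a single direction estimate into a soft distribution
    over the candidate directions (wider sigma = less reliable cue)."""
    w = np.exp(-0.5 * ((directions - estimate_deg) / sigma_deg) ** 2)
    return w / w.sum()

acoustic = np.array([0.2, 0.6, 0.2])        # classifier favors loudest talker ahead
head = soft_evidence(55.0, sigma_deg=30.0)  # head yaw turned toward the right
gaze = soft_evidence(60.0, sigma_deg=15.0)  # gaze fixating the right-hand talker
eeg = np.array([0.30, 0.30, 0.40])          # weak attention-decoding prior

# Weighted log-linear fusion: each cue votes in proportion to an
# assumed reliability weight; the result is renormalized.
cues = [acoustic, head, gaze, eeg]
weights = [0.30, 0.20, 0.35, 0.15]
score = np.exp(sum(w * np.log(c) for w, c in zip(weights, cues)))
score /= score.sum()

# The fused decision follows listener intent (60 deg), not loudness (0 deg).
print(directions[int(np.argmax(score))])

Here the intention cues outvote the loudness cue, steering processing toward the talker at +60 degrees even though the talker straight ahead is louder.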

We have projects underway to learn more about head movement during conversation and about activities that co-occur with listening. These projects rely on technologies such as advanced research activity monitors, laboratory-based head-tracking systems, virtual audio and visual displays that allow us to synthesize different communication scenarios, and advanced brain mapping with real-time EEG analysis to mock up what may one day be possible in actual hearing devices.

Please consider becoming a research participant!