Research Facilities - Auditory Behavioral Research Laboratory

We are a small group of faculty, staff, and students who conduct scientific research on human hearing. We are particularly interested in how the detection and recognition of complex sounds are affected by both lawful and random variation in sound as occurs in nature. A goal of our research is the development of computational models for predicting detection and recognition performance under various conditions of signal uncertainty. Other areas of research include: the perception of auditory motion; auditory frequency selectivity and masking in normal-hearing and hearing-impaired listeners; hearing assessment of typically developing children; evaluation of the auditory preferences of children with autism spectrum disorder; and the contribution of peripheral and central factors to individual differences in sound detection, discrimination, and recognition.
- Contribution of cochlear mechanics to informational masking
Current computational models of sound source identification fall far short of the human capacity for identification. Now, a recent advance in sparse signal encoding suggests a means of significantly improving the performance of these models.
Computational models can play an important role in helping to understand our ability to identify everyday objects and events from sound (Bregman, 1990). The traditional approach to modeling has relied on the extraction of structured features and Gestalt schema for identifying sound sources in a mixture (Ellis, 1996; Martin, 1999). These models require high information rates and extensive prior knowledge of the signals to perform well, yet they still fall short of the human capacity for identification.
A significant recent advance in sparse signal encoding suggests a means of improving the performance of these models. Compressed sensing (CS; Donoho, 2006) replaces the extraction of knowledge-based features with the projection of signals onto a small set of incoherent basis functions. For sparse signals, the result is accurate identification from few samples and with little prior information about the signals.
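As a rough illustration of the CS idea only (not the laboratory's model), the sketch below encodes a sparse signal by projecting it onto a small set of random, incoherent basis functions and then recovers it greedily. All names, dimensions, and the recovery routine (a simple orthogonal matching pursuit) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse "signal": n-dimensional but with only k nonzero components.
n, k, m = 256, 5, 100
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Incoherent encoding: project onto m << n random basis functions.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

# With high probability, the sparse signal is recovered from
# far fewer measurements (m = 100) than its dimension (n = 256).
x_hat = omp(A, y, k)
```

The point of the sketch is the sample-rate saving: identification of the sparse code needs only the m random projections, with no signal-specific features extracted in advance.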
Our goal in this project is to determine whether CS can be included as an early stage of encoding in traditional models to substantially reduce the information rate required by these models to approach the identification performance of human listeners.
[Work Supported by NIDCD Grant #R01 DC06875]
- Computational Auditory Scene Analysis
Auditory Pattern Analysis
Information in sound is conveyed by patterns of variation across frequency and time. We have developed a computational model (the Component Relative-Entropy Model) that, in many cases, accurately predicts how listeners detect both lawful and random patterns of acoustic variation as occur in nature.
Our remarkable ability to process information in sound is demonstrated every day as we make sense of the complex and continuous pattern of variation in the acoustic signals we encounter. The purpose of this project is to achieve a better understanding of this ability through a formal analysis of listeners' ability to discriminate variable acoustic patterns made up of tones.
There are three key elements of our approach. First, all efforts are linked by a single mathematical-methodological framework in which the information in a pattern is given precise meaning and listener performance is evaluated relative to a common theoretical standard. Second, the relative extent to which listeners make use of (weight) different sources of information within patterns is determined from trial-by-trial analyses of the data. Third, specific hypotheses regarding the outcome of experiments are generated based on known nonlinear transformations performed at the auditory periphery and a decision model that has made accurate predictions for the results of many past studies [R.A. Lutfi, J. Acoust. Soc. Am. 94, 748-758 (1993)].
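A minimal sketch of the second element, the trial-by-trial weighting analysis, is given below. It simulates a hypothetical experiment (all names, weights, and noise levels are illustrative assumptions, not the lab's data): each trial's tone components receive independent level perturbations, a simulated listener combines them with unequal weights to make a binary decision, and the analysis recovers the relative weights by correlating each component's perturbation with the response across trials.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical experiment: 6-tone patterns, independent level
# perturbations per component, many trials.
n_trials, n_components = 5000, 6
true_weights = np.array([0.5, 0.3, 0.1, 0.05, 0.03, 0.02])

perturbations = rng.normal(size=(n_trials, n_components))  # per-trial offsets
# Simulated listener: weighted sum of components plus internal noise,
# thresholded into a binary "signal present" response.
decision_var = perturbations @ true_weights + 0.2 * rng.normal(size=n_trials)
responses = (decision_var > 0).astype(float)

# Trial-by-trial analysis: correlate each component's perturbation with
# the response; normalized correlations estimate the relative weights.
corr = np.array([np.corrcoef(perturbations[:, j], responses)[0, 1]
                 for j in range(n_components)])
est_weights = corr / corr.sum()
```

For a linear observer with Gaussian perturbations, these correlations are proportional to the underlying decision weights, so the normalized estimates converge on the listener's true relative weights as trials accumulate.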
The results of the proposed studies are intended to further our understanding of how natural redundancies in patterns aid detection in noisy backgrounds, and how listeners process invariant relations among components that define dynamic properties of patterns like those of speech and other meaningful sounds.
[Work Supported by NIDCD Grant #R01 DC01262]
Human Sound Source Identification
Our ability to identify simple objects and events from sound is crucial for normal function in the world. Understanding the normal processes underlying this ability is key to the development of effective technologies for dealing with the impact of dysfunctional hearing on everyday listening.
What information in a sound allows us to identify its source? If the physical properties of the source (and the force driving it) are known, then theoretically it is possible to determine the sound it will produce. The inverse problem is rarely as simple. If one is uncertain regarding the source, then recovering source properties from the sound, analytically, can prove quite difficult. The human auditory system excels at this task, even in the face of enormous uncertainty regarding possible sources.
How it does this remains largely a mystery. In this project we take a novel approach to investigating human sound source identification. Using the principles of theoretical acoustics, we approximate the sound pressure waveform at the ear as it is generated by a number of simple resonant objects. We then examine the listener's ability to detect the lawful covariation among parameters of the resultant acoustic waveform.
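A minimal sketch of this kind of approximation is shown below, assuming the classic modal description of a freely vibrating bar: the waveform is a sum of damped sinusoids whose frequencies follow a fixed ratio series and whose decay rates scale with frequency, so the partials covary lawfully with the object's size and material. The function name and all parameter values are hypothetical illustrations, not the stimuli used in the project.

```python
import numpy as np

fs = 44100                      # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)   # 1 s of signal

def struck_bar(f1, decay, n_modes=4):
    """Idealized impulse response of a freely vibrating bar:
    modal frequencies at fixed ratios of the fundamental, with
    damping that grows with mode number."""
    # Approximate modal frequency ratios for a free transverse bar.
    ratios = np.array([1.0, 2.76, 5.40, 8.93])[:n_modes]
    sound = np.zeros_like(t)
    for r in ratios:
        # Higher modes decay faster and contribute less amplitude.
        sound += np.exp(-decay * r * t) * np.sin(2 * np.pi * f1 * r * t) / r
    return sound / np.max(np.abs(sound))

# Two hypothetical "sources": same geometry, different material damping.
wood = struck_bar(f1=440.0, decay=30.0)   # heavily damped partials
metal = struck_bar(f1=440.0, decay=3.0)   # long-ringing partials
```

The lawful covariation is the point: changing one physical parameter (here, damping) shifts all partials together in a constrained way, and it is the listener's sensitivity to such joint variation that the experiments probe.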
By measuring correlations between various features of the waveform and the listener's responses, we are able to identify the relevant aspects of the dynamic variation in the acoustic signal that listeners use to identify these sources.
[Work Supported by NIDCD Grant #R01 DC06875]
For more information, contact: