Super Directional Hearing Aids, Noise Reduction, and APD: Interview with Harvey Dillon, PhD
Douglas L. Beck, AuD, spoke with Dr. Dillon about auditory processing disorders, spatial hearing disorders, auditory rehabilitation, and more.
Academy: Hi, Harvey. Thanks for your time today. It's always a pleasure to speak with you.
Dillon: Thanks, Doug. Glad to be here.
Academy: For today's interview, I'd like to focus on the work and goals of the National Acoustic Labs in Australia.
Dillon: Very good.
Academy: The NAL Web site reveals that NAL's mission is to lead the world in research and development that improves the way hearing is assessed, hearing loss is prevented and hearing loss is rehabilitated. And so, with that as the basis, which topics are you most excited about?
Dillon: There are quite a few, but as we have limited time, let's focus on hearing assessment and improving hearing aids.
Academy: Okay, that sounds great. If you don't mind, let's start with improving hearing aids.
Dillon: Sure. Well, the biggest problem reported by the largest number of patients wearing hearing aids is, of course, difficulty hearing speech clearly in noisy backgrounds. We can go into the specifics if you'd like, but the bottom line is that even well-engineered hearing aids with appropriate gain and very good adaptive directional microphones are not as good as the normal hearing ear combined with a normally functioning brain.
Academy: I agree. The state of the art is that adaptive directionality is better in some situations (such as when the talker is directly in front of the listener, perhaps a meter or so away and the reverb is minimal) than omni-directional microphones, but of course, normal hearing combined with normal cognitive function is currently as good as it gets.
Dillon: That's correct at the moment. But a new opportunity has come along, which involves making directional microphones much, much better. Current directional hearing aids have two microphone ports located about a centimeter apart (or so) and the hearing aid creates directivity just from the minute differences in timing of sounds arriving at the two ports. Of course, if the two microphones were even farther apart, such as one on each side of the head, with the acoustic barrier provided by the head creating even bigger differences in the sounds arriving at the two microphones, then there is much more potential for creating greater directivity.
Academy: And so just to be clear, I think you're addressing the potential "super directionality" issue, which relies on the virtually instantaneous processing of sounds gathered from one or two microphone ports on each side of the head, and essentially transmitted through wireless systems, providing a better directional image?
Dillon: Yes, that's right. So ideally, imagine your hearing aid system has two ports on the left and two on the right, now we've got four data points from which the directional characteristics can be obtained and maintained. That is, super-directional microphones could compare "port 1" and "port 2" on the left (as is currently done), but imagine how much more information could be captured if we also compared both ports on the left to both ports on the right?
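The four-port comparison Dr. Dillon describes can be illustrated with a minimal delay-and-sum beamformer: align the channels toward the target direction so the target adds coherently while off-axis sound partially cancels. The geometry, sample rate, and port spacing below are illustrative assumptions for a hypothetical two-ports-per-ear array, not the specification of any actual device.

```python
# Minimal delay-and-sum beamformer sketch for a hypothetical four-port
# binaural array (two microphone ports per ear). All values illustrative.
import numpy as np

FS = 16000                      # sample rate in Hz (assumed)
SPEED_OF_SOUND = 343.0          # metres per second

# Hypothetical port positions along the left-right axis (metres):
# two ports per ear about 1 cm apart, ears ~15 cm either side of centre.
MIC_X = np.array([-0.16, -0.15, 0.15, 0.16])

def steer_delays(angle_deg):
    """Per-channel delays (in samples) that align a plane wave arriving
    from angle_deg (0 = straight ahead, +90 = right) across the array."""
    direction = np.sin(np.radians(angle_deg))
    delays_s = MIC_X * direction / SPEED_OF_SOUND
    return np.round((delays_s - delays_s.min()) * FS).astype(int)

def delay_and_sum(mics, angle_deg):
    """Shift the four channels so the target direction lines up, then
    average: the target sums in phase, off-axis sound is attenuated."""
    delays = steer_delays(angle_deg)
    n = mics.shape[1] - delays.max()
    aligned = np.stack([m[d:d + n] for m, d in zip(mics, delays)])
    return aligned.mean(axis=0)
```

Real super-directional processing is far more sophisticated (adaptive, frequency-dependent, and binaurally linked), but the sketch shows why four well-separated inputs carry more directional information than two closely spaced ports.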
Academy: That would be fantastic. As you said, because the left-right differences are so much greater in magnitude than the difference between left "port 1" and left "port 2," it would seem an array of inputs like this would allow a much greater directivity index and an improved signal-to-noise ratio across more difficult listening situations?
Dillon: That's exactly the idea. It seems that if we had super-directional hearing aids, the hearing aids would be able to approximate what the brain currently does when it's supplied with information from two normal hearing ears.
Academy: That is extraordinary and I have to agree, a very exciting starting point. I think most hearing professionals don't yet realize that the difference in loudness at the left and right ears, for typical speech signals coming from one side, is generally about 20 dB from 6 to 10 kHz, which is an incredible difference between the ears! And I should note, traditional hearing aids are not able to maintain and deliver those differences, which to me is the primary reason people with traditional hearing aids don't do well in noise. That is, the interaural loudness differences (ILDs) are pretty much squashed by most non-linear circuits (such as WDRC). Consequently, the person wearing those circuits does not receive the ILDs, which makes it harder to tell where sound is coming from, and the brain cannot figure out where to focus attention!
Dillon: Yes, but even for the frequencies below 6 kHz, there are large differences in the signals at the two ears. As an example, suppose you have two talkers: the one located 60 degrees off to your left is the one you want to listen to, and the talker you want to ignore is 60 degrees off to the right. The signal-to-noise ratio at the left ear is actually about 17 dB better than at the right ear, when averaged across frequency with articulation-index weighting. So the effect is absolutely huge, and with modern advanced processing, hearing aids can take advantage of that same physical phenomenon. Of course, reverberation, the distance between the talker and the listener, and background noise matter, too, but you get the idea.
Academy: Yes, and it's going to be a major game changer when these technologies are commercially available.
Dillon: I agree. Super-directional microphones will enable a much bigger improvement than the adaptive directionality currently on the market. We have made a video demo of this super-directional binaural processing and it's breathtaking. As the person listening spins around to hear each of several people talking simultaneously, the sound of the frontal talker comes into focus (so to speak) while the other people's voices (the maskers) drop away, as the directional beam is relatively narrow. It's quite good technology, and I feel comfortable saying people with mild hearing loss using this technology will likely have better hearing than people with normal hearing—in these same situations. People with moderate hearing loss may hear about as well as normal hearers.
Academy: That's remarkable.
Dillon: Yes, it really is. Currently when you look at someone wearing hearing aids you presume that person has a disability, a disadvantage. But with these new technologies, that person wearing hearing aids may hear better than you—and that may be why this is the game changer you mentioned earlier.
Academy: Harvey, although I don't think you've been actively doing research with regard to noise reduction technologies, would you care to make a few observations on where we are with noise reduction and where it might go in the near future?
Dillon: Yeah, well I can make a few comments, but you're right, it really hasn't been an area of study for me lately. Nonetheless, I suspect noise-reduction algorithms will get rid of the few disadvantages they have by estimating the amount of noise in the acoustic environment relative to each hearing aid wearer's hearing thresholds. That is, the amount of noise reduction applied should be related to the person's individual hearing thresholds, but again, I can't offer much more as we're not really looking at that at this time.
Academy: Okay, fair enough. Can we spend a few moments on NAL's new hearing assessment tools? I know NAL has really taken the lead on some very important new protocols as well as designing and providing auditory rehabilitative tools. Can you address some of those?
Dillon: Surely. We've spent quite a bit of time on auditory processing disorders and we believe this is something that merits a lot more research. Nonetheless, we've been carrying through this idea of spatial hearing disorders and we've pretty much completed our work on this.
Doug, as you know with your experiences and interests in auditory processing disorders (APDs), there are some children who are not able to use acoustic spatial cues like most people can. That is, these children generally have normal hearing thresholds and they have normal intelligibility for words in quiet, and very likely they have normal intelligibility in noise, too. However, if you move the speech and the noise around the child, such that the speech signal is coming from straight ahead, and let's presume the noises are on the left and the right, these children are not able to focus on the frontal sound and suppress the other sounds as well as most people can. We refer to these children as having a "spatial hearing disorder."
Academy: And these children are considered to be perhaps a "subset" of children with APD?
Dillon: Yes. That's how we think of them. That is we think there are lots of children with many different types of auditory processing issues, but the children we're talking about are the ones who cannot function well in noise, and for whom the reason is that they cannot suppress distracting sounds on the basis of their direction of arrival.
Academy: And I know you've got a test that helps identify these children…the LISN-Sentences.
Dillon: Exactly. The Listening in Spatialized Noise—Sentences test is distributed and sold by Phonak, including in the United States, and it comes from the work pioneered by Dr. Sharon Cameron. With this test, we can evaluate the child's ability to understand speech from the front in the presence of distracting stories that come from the left and right in one sub-test, and from the front, co-located with the speech, in another sub-test. The test adaptively varies the signal-to-noise ratio (SNR), and the child's task is to repeat one sentence at a time. When the child gets more than 50 percent of the words correct, the test is made harder (a smaller SNR is presented), and if the child gets less than 50 percent correct, the test is made easier. The test is easy to administer, and the software automatically performs the comparisons between conditions that enable the test to determine whether performance is outside normal limits and what the cause might be.
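The adaptive rule Dr. Dillon describes—harder after a sentence scored above 50 percent words correct, easier below—can be sketched in a few lines. The starting SNR and step size here are illustrative placeholders, not the actual LISN-Sentences parameters.

```python
# Hedged sketch of a one-up/one-down adaptive SNR track at the 50% point.
# Step size and starting SNR are illustrative, not the LISN-S values.
def next_snr(current_snr_db, proportion_correct, step_db=2.0):
    """Return the SNR for the next sentence under a 50% adaptive rule."""
    if proportion_correct > 0.5:
        return current_snr_db - step_db   # harder: less favourable SNR
    return current_snr_db + step_db       # easier: more favourable SNR

def run_track(responses, start_snr_db=10.0, step_db=2.0):
    """Given per-sentence proportions of words correct, return the SNR
    presented at each trial of the track."""
    snrs = [start_snr_db]
    for p in responses:
        snrs.append(next_snr(snrs[-1], p, step_db))
    return snrs
```

A procedure like this converges on the SNR at which the listener scores about 50 percent, which is what makes the measure comparable across conditions and children.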
Academy: And although we're running out of time, I wonder what you might offer as far as auditory training for children who present with spatial hearing disorder?
Dillon: Well, excellent question! After all, if there were nothing we could do about spatial hearing disorder, there wouldn't be much point in diagnosing these children. So then, what we recommend is a program we offer called "Listen & Learn."
Academy: And I should note Listen & Learn is available for purchase through the NAL Web site.
Dillon: Yes, Listen & Learn is a computer game that offers the same sort of auditory protocol as the LISN-Sentences test. That is, it teaches the child how to focus their auditory attention in front of them. The child plays the game wearing headphones, but the sounds have been processed so that they sound as if they are coming from directly in front or out to the left and right. The child's task is to click on the words on the computer screen that represent the target. Again, just like the LISN-Sentences test, when they give a correct answer the task becomes harder, and when they give an incorrect answer the task becomes easier. What we've seen is that children using Listen & Learn progressively improve by about 10 dB SNR over three months, and indeed every child we've worked with to date has improved their listening skills to normal, or slightly above normal, performance levels.
Academy: Harvey, that's fantastic. What a breath of fresh air!
Dillon: Yes, well, we're delighted with the early success of Listen & Learn and we're looking forward to many more children going through the program.
Academy: And the program is available to parents and professionals, too?
Dillon: Yes, Listen & Learn can be purchased by professionals and/or parents via the NAL Web site. We prefer to sell it to professionals as we recommend it only for children who have been diagnosed with spatial processing disorder.
Academy: Harvey, I am so appreciative of your time. Thanks very much for sharing your thoughts and insights with regard to the extraordinary work at NAL!
Dillon: Thank you, too, Doug. Always glad to chat with you and, of course, delighted to speak about the work we do at the National Acoustic Laboratories.
Harvey Dillon, PhD, is the director of research at the National Acoustic Laboratories (NAL) in Australia.
Douglas L. Beck, board certified in audiology, is the Web content editor for the American Academy of Audiology.