Douglas L. Beck, AuD, speaks with Dr. Lawson about his 2011 textbook on speech audiometry, critical differences in word recognition scores, dynamic range of speech, and more.
Academy: Good morning, Gary. Congratulations on the new book, Speech Audiometry.
Lawson: Thanks, Doug. My co-author, Dr. Mary Peterson, and I are both delighted to have the book published, and we appreciate your interest!
Academy: My pleasure. To me, speech audiometry is an essential element of audiometric testing, and it’s not an area in which many people have expertise.
Lawson: I agree. I think many people make assumptions about speech audiometry that can easily mislead them.
Academy: Sure, and the thing is, just like every other audiometric result, these data points are pieces of a puzzle and they must be taken in context. That is, check and cross-check before rendering a diagnosis!
Lawson: Absolutely. And that’s why we’ve arranged the book into pragmatic sections, which help the reader better understand the logistics from speech acoustics, speech masking, preparation for testing, and so forth.
Academy: And I have to admit, I really liked the review of speech sounds. For example, average conversational speech occurs at 65 dB SPL and has a typical dynamic range of 30 dB, spanning 12 dB above and 18 dB below that 65 dB SPL level; that is, it ranges from 47 to 77 dB SPL. However, you go further and explain that the effective dynamic range of speech may in fact be greater, perhaps 40 or 50 dB.
Lawson: Exactly. We tried to be concise, yet simple. The point you just made about the effective dynamic range is very important, because people with hearing loss are often uncomfortable with, and unable to process, speech at higher intensities, and in general we don’t do much speech testing at very high loudness levels. For people with normal hearing, the dynamic range for listening to speech is often larger than the intensity range over which speech is typically produced; people with hearing loss, however, may have a listening dynamic range that is smaller than the dynamic range of the speech they are trying to process.
Academy: I agree. I also appreciate the simple reference and conversion guide: loud speech is 85 dB SPL (65 dB HL), typical conversational speech is 65 dB SPL (45 dB HL), and faint speech is 45 dB SPL (25 dB HL). We use these values daily, but it’s good to be reminded.
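The arithmetic behind those figures is easy to sketch in a few lines of Python. This is an illustration only: the constant 20 dB SPL-to-HL offset is simply what the three quoted pairs (85/65, 65/45, 45/25) imply for speech material, and in practice the correction depends on the transducer and calibration standard.

```python
# Average conversational speech and its dynamic range, from the figures above.
AVG_SPEECH_SPL = 65                # dB SPL
RANGE_ABOVE, RANGE_BELOW = 12, 18  # dB above/below the average level

speech_floor = AVG_SPEECH_SPL - RANGE_BELOW    # lower edge of the range
speech_ceiling = AVG_SPEECH_SPL + RANGE_ABOVE  # upper edge of the range

# The quoted SPL/HL pairs imply a constant 20 dB speech-level correction.
# This offset is an approximation, not a calibration value.
SPL_TO_HL_OFFSET = 20  # dB

def speech_spl_to_hl(spl_db):
    """Convert a speech level in dB SPL to dB HL using the 20 dB offset."""
    return spl_db - SPL_TO_HL_OFFSET

print(speech_floor, speech_ceiling)                # 47 77
print(speech_spl_to_hl(85), speech_spl_to_hl(45))  # 65 25
```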
You also note the frequency range for speech is approximately 100 to 10,000 Hz, which is a useful reminder. We may use high-frequency audiometry for ototoxic monitoring or for measuring and monitoring noise-induced hearing loss, but there’s really not much speech information above 10,000 Hz.
Lawson: I agree. If we do a good job with speech audiometry between 100 and 10,000 Hz, and from 0 dB to 100 dB, we have a pretty good idea of what’s going on!
Academy: Makes sense to me. And of course you cover the basics, such as speech recognition thresholds (SRTs), speech detection thresholds, and speech recognition measures; the correct ways to obtain these measures (ascending and/or descending methods); the importance of presenting speech in background noise; and the many factors that impact speech audiometry test results, such as monitored live voice versus recorded presentation, male versus female talker, talker differences (fundamental frequency, accents, etc.), word familiarity, type of stimuli, number of test items, presentation levels, and more.
Lawson: Sure. And many of us fall into a set way of doing things, and we tend to trust our familiar protocol, but there are always multiple factors to be considered, as you’ve noted earlier.
Academy: One consideration that really is astonishing, and often seems unbelievable is the issue of test-retest reliability. I recall the original Thornton and Raffin (1978) paper on binomial variables, and it took a long time for me to fully appreciate and integrate the information.
Lawson: Yes, I know exactly what you’re referring to. In our book, we use the critical difference table from Carney and Schlauch (2007), which makes very much the same points.
Academy: Please explain the concept for the readers who might be unfamiliar with it.
Lawson: Okay, I’ll give it a try. If we assume a statistically significant difference can be determined using a 95 percent critical difference criterion (which assumes a 5 percent alpha level), then the table we present on page 42 defines how far apart two word recognition (also called speech discrimination) scores must be for the two to be statistically different from each other.
Academy: So one might say the critical difference is the smallest gap between two word recognition scores (WRSs) that is unlikely to be due to chance alone. That is, with a 95 percent criterion, if the patient’s true performance were the same on both tests, the two scores would differ by more than the critical difference less than 5 percent of the time.
Lawson: Yes, that’s the idea. The 95 percent criterion is commonly used in behavioral statistics, and it’s a good statistical target for us in audiology.
Academy: And so…let’s assume a patient has a symmetric mild-to-moderate sensorineural hearing loss (SNHL), and we’re using a 25-item word recognition list. Further, suppose we present the list to the patient (live or recorded) and the first score, obtained on the left ear, is 88 percent. What is the range of scores on the right ear that would be statistically equivalent?
Lawson: Given a 25-word list, the second score could fall anywhere from 68 to 100 percent. That is, the score obtained in the right ear is not statistically different from the left ear unless it falls below 68 percent. And, as the table (page 42) also shows, we can tighten the criteria by using more words: if we gave the patient a 50-word list and the first score was 88 percent, the 95 percent limits would be 74 to 96 percent.
Academy: And just for grins, supposing we stick with 25-word lists, and the first score is 52 percent?
Lawson: If the first score is 52 percent, then a second score anywhere from 28 to 76 percent is not statistically different; only a score outside that range (below 28 or above 76 percent) would be.
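For readers who want to see where numbers like these come from, here is a rough binomial sketch in Python. It estimates the true proportion from the first score, treats both scores as independent Binomial(n, p) draws, and finds the largest score difference still attributable to chance at the 5 percent level. It illustrates the idea only; it does not reproduce the Carney and Schlauch (2007) table, so the endpoints it prints will only approximate the published values.

```python
import math

def binom_pmf(k, n, p):
    """Probability of exactly k correct out of n items when the true
    proportion correct is p."""
    return math.comb(n, k) * (p ** k) * ((1 - p) ** (n - k))

def retest_range(first_correct, n, alpha=0.05):
    """Approximate range of second scores (percent correct) that are NOT
    significantly different from the first, based on the distribution of
    the difference of two independent Binomial(n, p) test scores."""
    p = first_correct / n
    # Exact distribution of D = X1 - X2 for two independent retests.
    diff = {}
    for a in range(n + 1):
        pa = binom_pmf(a, n, p)
        for b in range(n + 1):
            diff[a - b] = diff.get(a - b, 0.0) + pa * binom_pmf(b, n, p)
    # Smallest d such that P(|D| > d) <= alpha.
    d = 0
    while sum(pr for k, pr in diff.items() if abs(k) > d) > alpha:
        d += 1
    lo = max(0, first_correct - d)
    hi = min(n, first_correct + d)
    return round(100 * lo / n), round(100 * hi / n)

# 25-item list, first score 88% correct (22 of 25):
print(retest_range(22, 25))  # compare with the 68-100% band discussed above
# 25-item list, first score 52% correct (13 of 25):
print(retest_range(13, 25))  # compare with the 28-76% band discussed above
```

Note how the band is widest near 50 percent correct, where binomial variance peaks, and narrows toward the extremes, which is exactly the pattern the critical difference tables show.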
Academy: And that’s the part that will leave people scratching their heads…. But it’s so important to read and review this information, because many of us in audiology and otolaryngology tend to speak in terms of “clinical difference,” which may well be playing fast and loose with the data. That is, many people might say that if you get 88 percent in the first ear and then 76 percent in the second ear (using a 25-word list), the two scores are “clinically different.” But that may not be so; indeed, those two scores are not statistically different.
Lawson: Correct. These are very important concepts and they are powerful when applied correctly in the clinic. In the book, we note that some clinicians use categories to describe WRSs, such as 90 to 100 percent correct would be excellent or normal, 75 to 89 percent might be considered good, 60 to 74 percent correct would be fair, 50 to 59 percent would be poor, and below 50 percent would be considered very poor.
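The category boundaries Lawson lists map directly onto a small lookup. A minimal sketch, using only the cut-offs quoted above (the labels and boundaries are the ones some clinicians use, not a universal standard):

```python
def wrs_category(percent_correct):
    """Map a word recognition score (percent correct) onto the descriptive
    categories quoted above: excellent/normal, good, fair, poor, very poor."""
    if percent_correct >= 90:
        return "excellent/normal"
    if percent_correct >= 75:
        return "good"
    if percent_correct >= 60:
        return "fair"
    if percent_correct >= 50:
        return "poor"
    return "very poor"

print(wrs_category(88))  # good
```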
Academy: That may be the best way to go. In fact, with regard to the degree of hearing loss, we use normal, mild, moderate, severe, and profound, as the individual dB level at each frequency is not as important as the overall presentation. I also want to mention the Dubno, Lee, Klein, et al. (1995) chart that you present on page 69, which demonstrates the 95 percent confidence limits for WRSs as a function of the three-frequency pure-tone average (3FPTA). Can you review that one, too?
Lawson: Yes, that’s another important reference. It demonstrates that given a 3FPTA of 20 dB, and assuming a 25-word list, the 95 percent confidence limit tells us the WRS should be 88 percent or better. Likewise, given a 3FPTA of 35 dB, the 25-word list should yield a score of 68 percent or better, and lastly, just to round out the numbers, given a 50 dB loss, the lower limit of the 95 percent confidence interval would be 48 percent. Scores below those limits for a given 3FPTA would be considered disproportionately poor, and the hearing loss has a higher probability of not being cochlear.
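Those three data points can be sketched as a simple screening check. This uses only the three 3FPTA values quoted above from the Dubno et al. (1995) chart (the published chart covers the full range of PTAs and should be consulted for anything in between):

```python
# Lower 95% confidence limits for a 25-word WRS as a function of 3FPTA,
# using only the three data points quoted above from Dubno et al. (1995).
LOWER_LIMIT_BY_3FPTA = {20: 88, 35: 68, 50: 48}  # {3FPTA in dB: % correct}

def disproportionately_poor(three_freq_pta, wrs_percent):
    """Return True if the WRS falls below the quoted lower confidence limit
    for the given 3FPTA. Only the three tabulated PTAs are supported here;
    the full published chart is needed for other values."""
    limit = LOWER_LIMIT_BY_3FPTA[three_freq_pta]
    return wrs_percent < limit

# A 60% score with a 35 dB 3FPTA falls below the 68% lower limit:
print(disproportionately_poor(35, 60))  # True
```

A score flagged here would warrant the kind of follow-up the interview mentions: the loss has a higher probability of not being cochlear.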
Academy: Gary, this is really interesting, and of course the implications are vast with regard to “rollover” testing and auditory processing disorders and more.
Academy: Okay, well, as we have limited time and space here, I’m very glad we could review some of these basic, yet very important, speech audiometry concepts. I think many of our colleagues will find the book very interesting and revealing, and I encourage them to pick up a copy. I might add, there are lots of “ah-hah” moments contained within!
Lawson: Thanks, Doug.
Academy: My pleasure, Gary. Thanks for your time.
Gary Lawson, PhD, is coordinator of the AuD program at Western Michigan University and co-author of the 2011 textbook Speech Audiometry.
Douglas L. Beck, AuD, Board Certified in Audiology, is the Web content editor for the American Academy of Audiology.