By Emily Sandgren and Joshua M. Alexander
This article is part of the July/August issue of Audiology Today (Volume 35, Number 4).
Although speech-in-noise perception is the primary focus of hearing aid innovation, treatment goals, and outcome measures, less attention has been devoted to improving music perception with hearing aids. Patient-centered care includes considering other contributors to quality of life, such as sources of entertainment. Music is one such significant source, as highlighted by data indicating that Americans 16 years of age and older listen to music for more than two hours per day (Delmonte, 2018).
Music listening is also important to hearing aid users. Greasley (2022) reported that hearing aid users actively engage with music, yet they often experience negative emotional consequences when they disengage from music because of poor sound quality with their hearing aids (Greasley et al, 2020). This problem is well documented: hearing aid users consistently report dissatisfaction with sound quality, especially distortion, when listening to music with hearing aids (Greasley et al, 2020; Greasley, 2022; Madsen and Moore, 2014).
Several factors might contribute to degraded music quality from hearing aids optimized for speech, including the level of the input signal and the signal processing features. Average preferred listening levels for recorded music are around 80 dBA (Croghan et al, 2016), compared with average speech levels of around 65 dBA.
In addition, the peak levels for music can be substantially higher than for speech at the same average level. Combined, these two properties mean that input levels at the hearing aid microphone can approach the saturation limit of the hearing aid, especially at the very low frequencies where music is most intense. When input levels approach or exceed this limit, distortion may occur. Chasin (2022) identified the analog-to-digital converter as the likely source of this saturation limit.
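The headroom argument above can be made concrete with a small sketch. The average levels (80 dBA music, 65 dBA speech) come from the text; the crest factors (peak-to-average ratios) and the saturation limit are illustrative assumptions, not measurements from this study:

```python
# Illustrative headroom calculation at the hearing aid input.
# Crest factors and the saturation limit below are assumed values
# chosen only to show why music is more likely to saturate the input.

def peak_level_db(average_db, crest_factor_db):
    """Peak level = average level plus the crest factor (peak-to-average ratio)."""
    return average_db + crest_factor_db

SATURATION_LIMIT_DB = 105  # assumed input limit of the analog-to-digital converter

speech_peak = peak_level_db(average_db=65, crest_factor_db=12)  # 77 dB SPL
music_peak = peak_level_db(average_db=80, crest_factor_db=18)   # 98 dB SPL

print(f"Speech headroom: {SATURATION_LIMIT_DB - speech_peak} dB")
print(f"Music headroom:  {SATURATION_LIMIT_DB - music_peak} dB")
```

With these assumed numbers, music leaves only a few decibels of headroom before saturation, whereas speech leaves a comfortable margin.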
Signal processing features used to optimize speech also may degrade music quality. A partial list of features that might degrade music quality includes wide dynamic range compression (WDRC), directionality, noise reduction, transient sound reduction, feedback suppression, and frequency lowering. WDRC can be problematic when listening to studio-recorded music because the recording is often already compressed; furthermore, increasing the number of independent WDRC channels can negatively affect the spectral balance of the music (Chasin, 2022).
Directionality and noise reduction may distort the temporal features of music, and transient sound reduction can be problematic because transients are a common feature of music, especially from percussive instruments (Chasin, 2022). Feedback suppression is prone to entrainment, which distorts intense tonal elements of music. Finally, frequency lowering may alter the harmonic relationships in music and affect overall timbre if the destination region includes the low frequencies.
In light of how hearing aid features designed for speech enhancement can degrade music quality, manufacturers have created default music programs that deactivate or minimize the effects of most of the features listed in the previous paragraphs. However, for most brands, information about how the default music program differs from the default speech program is proprietary or unavailable, leaving clinicians to guess whether the music program might be effective for their patients.
Therefore, it is not surprising that almost 60 percent of hearing aid users indicated that music was not discussed in the clinic (Greasley et al, 2020) or that only 25 percent to 33 percent of hearing aid users report having a music program (Greasley, 2022; Looi et al, 2018; Madsen and Moore, 2014).
To help clinicians and patients make more informed decisions, we evaluated the efficacy of the default music program compared to the default speech program across seven leading hearing aid brands. Previous studies on the efficacy or effectiveness of music programs have found mixed results. For example, Madsen and Moore (2014) reported little difference in music listening experiences between survey respondents with and without music programs.
In addition, in a study by Vaisberg et al (2019), musicians reported that the music programs in their hearing aids did not provide benefits. However, Vaisberg et al (2017) reported that regular hearing aid users listening to music samples recorded from different hearing aid brands rated the music program higher than the speech program for two brands and the same for two other brands. They also reported bigger rating differences between brands than between programs within the same brand.
Similar to Vaisberg et al (2017), we compared the default music program to the default speech program across brands. However, there were several methodological differences, including the number of brands tested, the use of absolute rather than comparative ratings, and the use of listeners with pure-tone thresholds < 20 dB HL. Participants with normal pure-tone hearing sensitivity were used to test the effects of hearing aid processing independently of the effects of hearing loss, under the assumption that the best and worst conditions would be the same for hearing aid users.
Seven brands of receiver-in-canal (RIC) hearing aids (TABLE 1) were programmed to the National Acoustic Laboratories NAL-NL2 prescriptive targets and verified in a Verifit 2 test box for the N3 audiogram, representing a typical mild sloping to moderately severe hearing loss (Bisgaard et al, 2010). All advanced features were at the default settings, except the automatic environmental classifier and frequency lowering, which were deactivated.
After verifying the default speech program, the default music program was added; no additional changes were made to the gain. The test box was also used to present and record short snippets (8.7 to 16.5 seconds) of culturally inclusive music samples of varying genres at an 80 dB SPL presentation level, using each hearing aid in the speech and music programs. This study was approved by our Institutional Review Board.
Seventy individuals aged 18–24 years with pure-tone thresholds < 20 dB HL participated (49 indicated they were female). Half of the participants were classified as musicians based on self-reported seven or more years of formal music training; the other half were classified as non-musicians, having five or fewer years of formal training.
Participants rated the sound quality of 10 music samples on a seven-point scale: 1 (unacceptable), 2 (very poor), 3 (poor), 4 (fair), 5 (good), 6 (very good), 7 (excellent). In addition to the 14 hearing aid conditions (seven brands in the speech and music programs), one control condition consisted of the original music snippets. The corpus of music samples was presented in a completely random order for each participant.
FIGURE 1 shows the average ratings across the 15 conditions. The results indicated that all seven hearing aids significantly degraded sound quality compared to the original control condition. To evaluate how the recording procedure may have influenced participants’ ratings of the hearing aid stimuli, 19 listeners completed the experiment with a second control condition consisting of recordings of the original stimuli in the test box but with no hearing aid. Ratings of these stimuli were only slightly lower (≈ 5.0) than for the first control condition (≈ 5.5). Therefore, participants’ ratings were primarily influenced by how the hearing aids processed the music.
To better understand the differences between the default speech and music programs, participants’ ratings were converted to Z-scores to factor out individual differences in use of the subjective rating scale. Rescaling was necessary because participants differed in how critical their mean ratings were and in how much of the scale they used: some participants’ ratings spanned the full range from 1 to 7, whereas others varied only from 4 to 6. FIGURE 2 shows these data.
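The normalization described above is a standard per-participant Z-score transformation. A minimal sketch (the two rater profiles are hypothetical, invented only to illustrate the problem the rescaling solves):

```python
import statistics

def zscore_ratings(ratings):
    """Rescale one participant's raw ratings so they have a mean of 0 and a
    standard deviation of 1, removing differences in how critical the
    participant was (mean) and how much of the scale they used (spread)."""
    mean = statistics.mean(ratings)
    sd = statistics.stdev(ratings)
    return [(r - mean) / sd for r in ratings]

# Two hypothetical participants hearing the same four conditions:
full_scale_rater = [1, 3, 5, 7]    # spreads ratings across the whole scale
narrow_rater = [4, 4.5, 5.5, 6]    # compresses ratings into the 4-6 range

print(zscore_ratings(full_scale_rater))
print(zscore_ratings(narrow_rater))
```

After rescaling, both participants’ ratings are on a common scale, so differences between brands and programs are no longer confounded with differences in how each person used the rating scale.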
Statistical analyses were performed on the normalized data. These analyses revealed that ratings for the brands and programs did not differ between the musicians and non-musicians (p > 0.05). However, there was a main effect of hearing aid brand (p < 0.001). Ratings for Brand B were significantly higher than for all other brands (p < 0.001), and ratings for Brand F were significantly higher than for all other brands except Brand B (p < 0.001). In addition, ratings for Brand D were significantly lower than for Brands C and G (p ≤ 0.001 and p ≤ 0.01, respectively). The main effect of hearing aid program was not statistically significant (p = 0.07); however, there was a significant interaction between hearing aid program and brand (p < 0.001).
As shown by the asterisks in FIGURE 2, the music program significantly improved participants’ ratings for three of the seven brands (A, D, and E) compared to the speech program, supporting the efficacy of their music programs. Interestingly, Brands B and F, whose overall ratings were significantly higher than the other brands, were the only brands to show a significant decrease in ratings for the music program compared to the speech program. Finally, two brands (C and G) did not show a difference between the music and speech programs.
The significant interaction also indicates that the differences between brands depended on the hearing aid program. Specifically, music quality ratings were more similar between brands in the music program than in the speech program. In the speech program, 16 of the 21 pairwise comparisons between brands were significantly different (p < 0.05); the exceptions were the comparisons among Brands A, D, and E, between Brands C and E, and between Brands C and G. In contrast, the only significant differences between brands in the music program were Brand B versus all others (p < 0.001) and Brand D versus Brand F.
Like Vaisberg et al (2017), this study found that differences in music quality ratings were larger between brands than between music and speech programs. To assess whether features within the music programs across brands might account for the observed results, TABLE 2 summarizes the default feature settings of each brand’s music programs.
Each brand used a full or partial (“virtual pinna”) omni microphone response, deactivated or reduced the feedback suppression algorithm, and deactivated all or most noise reduction algorithms. These commonalities suggest that these features were not responsible for the differences between the brands in this study. In addition, compression ratios and overall frequency responses computed from the recordings did not show a consistent pattern with the music quality ratings.
Also shown in FIGURE 2 is the bit depth of the analog-to-digital converter in the hearing aid. Each bit adds 6 dB to the overall dynamic range. Chasin (2022) notes that low bit depths can clip the peaks of high input signals and degrade their quality. It is unclear if bit depth affected the music quality ratings in this study because Brand E with 16 bits and Brand G with 18 bits had some of the lowest ratings, but so did Brand D with 24 bits. However, ratings for Brand D may have been affected by its high-frequency response above 8 kHz, which was the lowest among the brands.
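Chasin’s point about bit depth can be illustrated with a short sketch. The 6 dB-per-bit rule comes from the text; the clipping function and the waveform values are illustrative assumptions, not measurements from any of the hearing aids tested:

```python
def dynamic_range_db(bits):
    """Each bit of the analog-to-digital converter adds ~6 dB of dynamic range."""
    return 6 * bits

for bits in (16, 18, 24):
    print(f"{bits}-bit converter: ~{dynamic_range_db(bits)} dB dynamic range")

# When the input exceeds the converter's range, samples are hard-limited
# (clipped), flattening the peaks of intense passages and adding distortion:
def clip(sample, limit=1.0):
    return max(-limit, min(limit, sample))

peaky_waveform = [0.2, 0.9, 1.4, -1.2, 0.5]  # illustrative samples
print([clip(s) for s in peaky_waveform])     # peaks beyond ±1.0 are flattened
```

This is why a 16-bit converter (~96 dB) leaves less room for the high peak levels of music than a 24-bit converter (~144 dB), although, as noted above, bit depth alone did not predict the ratings in this study.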
Another factor that might explain some of the results is the maximum power output (MPO), as reported during the verification procedure. Among the conditions with the lowest MPOs and the lowest ratings were Brand A (speech program), Brand E (speech program), and Brand D (both programs). Finally, the strongest correlations with the ratings were found for acoustic measurements (e.g., peak gain and temporal envelope distortion) associated with the processing of the ultra-low frequencies (< 200 Hz).
The factors highlighted in the previous paragraphs—bit depth, MPO, and ultra-low frequency processing—may be related to each other and to attributes participants reported as forming the basis of their ratings. At the end of the study, participants were asked to list the attributes they used to make their decisions about music quality ratings. Distortion, clarity, and several other words relating to distortion and clarity were the most commonly reported attributes.
The results indicate that brand choice has more influence on the music listening experience than the programming changes made by the audiologist. Unfortunately, because hearing aids, their physical components, and their digital algorithms are proprietary technology, not all the differences between brands can be known or controlled in a study. More research is needed to understand the acoustic differences that could explain the results.
While the choice of hearing aid brand is based on multiple factors, music may be a deciding factor for patients who highly value it as a source of employment, a significant source of entertainment, or a contributor to mental or emotional health. For these patients, the brand choice may be critical to remaining engaged with music and maintaining their quality of life.
The list of music samples and the recordings as processed by hearing aids used in the study can be accessed at https://web.ics.purdue.edu/~alexan14/music.
The authors thank Donald Hayes for his insightful feedback during the experimental design. The authors also thank Caitlin Ashby, Sarah Bullentini, Kayla Kumar, Annie Liao, Julia Malek, Annika Schenkel, and Elizabeth Scheumann for running the Purdue Experimental Amplification Research (EAR) lab while data were being collected for this project.
Bisgaard N, Vlaming MS, Dahlquist M. (2010) Standard audiograms for the IEC 60118-15 measurement procedure. Trends Amplif 14(2):113–120.
Chasin M. (2022) Music and Hearing Aids: A Clinical Approach. San Diego, CA: Plural Publishing, Inc.
Croghan NB, Swanburg AM, Anderson MC, Arehart KH. (2016) Chosen listening levels for music with and without the use of hearing aids. Am J Audiol 25(3):161–166.
Delmonte R. (2018) Audio Monitor US, 2018. https://musicbiz.org/wpcontent/uploads/2018/09/AM_US_2018_V5.pdf (accessed May 1, 2023).
Greasley A, Crook H, Fulford R. (2020) Music listening and hearing aids: perspectives from audiologists and their patients. Int J Audiol 59(9):694–706.
Greasley A. (2022) Characterising how levels of hearing loss affect music listening with hearing aids. Oldenburg Music and Hearing Health Workshop 2022. https://uol.de/music-hearing-health-workshop (accessed May 1, 2023).
Looi V, Rutledge K, Prvan T. (2018) Music appreciation of adult hearing aid users and the impact of different levels of hearing loss. Ear Hear 40(3): 529–544.
Madsen SM, Moore BC. (2014) Music and hearing aids. Trends Hear 18:1–29.
Vaisberg JM, Folkeard P, Parsa V, Froehlich M, Littmann V, Macpherson EA, Scollie S. (2017) Comparison of music sound quality between hearing aids and music programs. AudiologyOnline: Article 20872. www.audiologyonline.com (accessed May 1, 2023).
Vaisberg JM, Martindale AT, Folkeard P, Benedict C. (2019) A qualitative study of the effects of hearing loss and hearing aid use on music perception in performing musicians. J Am Acad Audiol 30(10):856–870.