Audiology is the study of one of the most important senses we possess as human beings: a sense that connects us 24/7 to our environment, to other people through a real-time fusion of mind and emotion we call speech, and to the opportunities of life that arise from being in the right place at the right time.

Yet, despite all this, society largely seems to consider hearing care irrelevant. It often takes a crisis in a person’s life before someone seeks out an audiologist—if they seek us out at all.

Why is society so indifferent toward hearing care? Does it really matter if it is? And, if it does matter, what should we be doing to make hearing care relevant?

Society’s Indifference to Hearing Care

In this article, I’ll make the case that society’s indifference is a symptom of a much deeper problem, one that goes to the very heart of what we believe hearing care stands for. I’ll argue that audiology historically has adopted a condition-based paradigm and, while this has helped get us to where we are today as a profession, it is not equipped to take us to the next stage. What audiology needs now is to shift toward a resource-based paradigm.

There are about 35.7 million1 individuals in America with an unaddressed reduction in hearing. Surprisingly, 23.4 million of these individuals are between 20 and 69 years old, which means that the majority of individuals making up this unmet need (66.8 percent) would be turned off by our traditional image of the “active retired” (Lee et al, 2011). If we’re looking for reasons for society’s antipathy toward hearing care, it certainly doesn’t help that we’re driving away two-thirds of our target audience through messages and marketing that shout “hearing care is for old people.” More sobering still, even the people we might label as “old people” don’t identify with such messages (Levy and Banaji, 2002).

The Majority Rules

The second major contributor to the general indifference toward hearing care concerns the low number of individuals who use hearing technology. Human beings tend to follow the crowd; we look for social proof of what’s considered normal or expected behavior (Cialdini, 2009). With less than 3 percent of the U.S. population using hearing technology—about 10 million individuals—hearing care is automatically rendered irrelevant to the remaining 97 percent of society.2

But That’s Not Me

We instead notice the individuals wearing the bigger devices, those who require more hearing power or perhaps have visual or dexterity problems. Such wearers tend to be older and/or have greater hearing difficulties, difficulties that can remain noticeable even though they are using hearing technology. This biases our concept of the typical hearing aid user.

Now imagine that you begin experiencing difficulties with your own hearing. You compare yourself (Hogg and Reid, 2006) to the “typical” hearing aid user and what do you notice? That they’re “older, deafer, and needier” than you.

Are you ready to join this social category of hearing aid users? Because, if you do, you know people are going to judge you the same way you’ve historically judged others in this category. You’ll be seen as “one of them.” Is that what you want? What do you do instead? You wait—until you’re ready.

You eventually find yourself at a point where the cost of not doing something outweighs the cost of doing something. But you’re in a predicament. You know you need something, but you don’t see yourself (or want to be seen) as “one of them.” Suddenly, personal sound amplifiers and over-the-counter hearing aids look inviting, don’t they? Here’s a device that will improve your hearing, but isn’t a hearing aid.

Your experience up to now is that hearing aids are for people who are older, deafer, and needier than you. You know this because of all of the people you’ve noticed using hearing aids (Tversky and Kahneman, 1973) and because the marketing from the profession keeps showing people who are older and deafer than the way you see yourself.

Conclusion 1: If the purpose of hearing care is primarily presented as “the provision of hearing aids” and we subsequently depict users of hearing aids as “older” (e.g., the active retired), this signals to 94.6 percent of society3 that hearing care is irrelevant, turning away 23.4 million potential candidates in the process.

Conclusion 2: If the purpose of hearing care is primarily presented as “the diagnosis and treatment of hearing loss,” the maximum societal relevance we might hope to achieve is about 14 percent.4 This minority will be perceived by the remaining 86 percent of Americans as “different than the rest of us,” a situation that fosters stereotyping and prejudice (Fiske and Taylor, 2013). This stigmatization of the minority makes it difficult for any individual needing to move into that stigmatized group (e.g., because their hearing changes), which further reduces the likelihood of that person using hearing technology at the point it might become beneficial.

Conclusion 3: We can never hope to change society’s attitudes toward hearing care unless we find a way to make audiology more universally relevant. It is therefore essential that hearing technology is not portrayed or perceived as something needed when a person is older, deafer, or needier. Likewise, users of hearing technology must not be portrayed or perceived as “different than the rest of us.” Our profession is guilty of both portrayals.

This idea of separating society into people with “normal hearing” and those who are “hearing impaired” is not only incapable of penetrating society’s indifference, it is likely sustaining it. We have had decades of history to prove this (Kochkin, 2009), yet we continue to teach and pursue the same worn-out strategies. It is becoming increasingly clear that audiology needs a paradigm shift.

Confusion in Determining Candidacy

When we consider how long audiology has been discussing the low and delayed uptake rates for hearing technology (Kochkin, 2009) described above, it is clear that the paradigm we have inherited keeps hitting a brick wall. We talk of the need to make hearing care more accessible and more affordable (National Academies of Sciences, Engineering and Medicine, 2016), both of which have a role to play. But, even in the United Kingdom—where more than 80 percent of hearing technology is provided at no cost by the National Health Service and the public has excellent local access—the true uptake rate there is still only about 28 percent of potential candidates.5

It appears that, even when hearing care is both accessible and affordable, there are still more potential candidates not using hearing technology than using it.

Defining the Candidate

What do we mean by potential candidates? Surely the way we define candidacy determines our calculations of prevalence and uptake, doesn’t it? Why do we even believe uptake should be higher?

If we’re basing our estimates on self-reported data, such as that found in the MarkeTrak (2018) and EuroTrak (2018) surveys, we’re suggesting candidacy is related to the subjective perception of hearing difficulties, which ignores the well-known phenomenon that other people normally notice our difficulties before we do. Interestingly, when these same candidates subsequently tell us in those same surveys that the reason they don’t use hearing technology is because their “hearing loss is too mild” or “not severe enough” (Kochkin, 2007), we now conveniently choose to discard the validity of their subjective perception! It appears we want it both ways.

If we’re not happy with the subjective perception of candidacy, then what if we choose to extrapolate from measured audiometric thresholds in a sample of the population (Agrawal et al, 2008; Lin et al, 2011)? To do so, we would generally base our candidacy criteria on pure-tone averages greater than a certain threshold. The deciding factor for candidacy then becomes the threshold at which a person is expected to benefit from hearing technology.

Deciding where to set this threshold introduces another challenge entirely. Should we be basing the threshold on when someone is more likely to subjectively perceive benefit from using hearing technology, as suggested by our use of outcome measures such as the Client-Oriented Scale of Improvement (Dillon and Ginis, 1997) and the Abbreviated Profile of Hearing Aid Benefit (Cox and Alexander, 1995)? Or should we base the threshold on something more concrete, even if the perceived benefit proves to be minimal?

Conclusion 4: Our traditional paradigm consistently fails to reconcile measured thresholds with the subjective perception of hearing difficulties and benefit. Consequently, professionals and the public alike remain confused about when a person actually becomes a candidate for hearing technology.

We’ve known for decades that there’s a mismatch between measured thresholds and subjective perception, but our dilemma is this: If the measured thresholds class a person as “hearing impaired” but that person doesn’t perceive hearing difficulties, is it still appropriate to fit hearing technology? Perhaps a resolution would come if we could unambiguously identify the risk of not using hearing technology.

The “Missing” Risk of Untreated Hearing Loss

Consider the following medical conditions: cancer, diabetes, heart disease, dementia, osteoporosis, AIDS, arthritis, macular degeneration. A layperson understands the importance of early diagnosis and effective treatment to mitigate the risk from these conditions, even if they can’t subjectively perceive any difficulties.

How does the layperson perceive the risk in not diagnosing and treating hearing loss? That they need to turn the TV up louder? That they find noisy restaurants intolerable? That family members complain more often? Somehow, such risks do not convey a sense of urgency. So why bother?

In recent years, audiology has initiated a quest to find the “missing risk” in hearing loss. So far, we’ve linked it to everything from depression (Li et al, 2010), to loneliness (Nachtegaal et al, 2009), strokes (Lin et al, 2008), poorer driving skills (Hickson et al, 2010), heart disease (Susmano et al, 1988), obesity (Curhan et al, 2013), diabetes (Bainbridge et al, 2008), falling (Lin and Ferrucci, 2012), hospitalization (Genther et al, 2013)—and even to an earlier death (Genther et al, 2014)!

Whether hearing loss actually plays a causative role in these risks is conveniently overlooked in our social media feeds. To add further to the confusion, many of these studies, including the ones mentioned above, either do not separate treated hearing loss from untreated hearing loss or else find no significant effect from the use of hearing aids. Finding evidence that hearing technology reduces these highlighted risks is, to put it politely, a challenge.

The most widely publicized of these risks come from studies that suggest that even mild hearing loss appears to increase a person’s risk of cognitive decline or cortical changes (Lin et al, 2011; Campbell and Sharma, 2013; Campbell and Sharma, 2014; Lee et al, 2018). Notwithstanding the debate as to whether such correlations reflect cause (Dawes, 2017), we must remember that merely linking mild hearing loss to an increased risk is not the same as saying “mild hearing loss is the threshold for candidacy.” To say so would imply that the risk we’ve highlighted can be mitigated by fitting hearing technology, and while some studies suggest this may be the case (Amieva et al, 2015), others question it (Dawes et al, 2015).

Perhaps such inconclusiveness is to be expected if we’re hoping to untangle the complex and adaptive interactions of hearing, aging, cognition, and psychosocial factors. But it certainly doesn’t help our quest when we’re relying on arbitrarily established “grades of hearing impairment” established using pure-tone averages (World Health Organization, 2018).

Throwing Away the Data

Do we really believe that all of the different reasons that a person’s hearing might be reduced—including noise, genetics, diabetes, obesity, smoking, ototoxicity, infection, heart disease, Menière’s, osteoporosis, stress, or something else—are biologically equivalent? Are we really saying that what’s important, out of all the possible complex interactions taking place behind the scenes, is the average of a person’s ability to hear four sine waves set an octave apart? Does that sound like evidence to you?

Reduced audibility may be causative in cognitive decline, but can we really expect to find evidence from an average of four sine waves? It seems highly unlikely.

For example, try calculating both the pure-tone average (PTA) and Speech Intelligibility Index (SII) (Killion and Mueller, 2010) from the following thresholds: 250 (30), 500 (10), 1000 (5), 2000 (15), 3000 (55), 4000 (50), 6000 (45), 8000 (40). You should get a four-frequency (i.e., average of 500, 1000, 2000, and 4000 Hz) PTA of 20 dB HL and an SII of around 63 to 64 percent (using the “Count-the-Dots” method), meaning that this individual with “clinically normal hearing” is missing around one third of the sounds that make up speech! If we were universally using the SII instead of the PTA, at least we could begin asking incisive questions such as the following:

How is the brain dealing with those missing sounds?

What compensatory mechanisms or shifted load might be taking place cognitively and/or socially?

Is there any increase in stress levels or reactive oxidative species resulting from the increased mismatch between perception and expectation?

Does this person with clinically normal hearing have an increased risk of dementia, like counterparts classed as hearing impaired with a PTA of 26 dB?

Does this risk go away if we restore access to the missing 36 percent, or does it remain anyway?

If we’re using the PTA, this individual has clinically normal hearing, which means questions such as these never get asked!
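The PTA arithmetic in the example above is easy to verify. Here is a minimal Python sketch that contrasts the four-frequency PTA with a speech-weighted audibility index; note that the importance weights and the 50 dB “speech ceiling” below are illustrative placeholders of my own, not the published Count-the-Dots values (Killion and Mueller, 2010), which use measured band-importance data.

```python
# The audiogram from the worked example (frequency in Hz -> threshold in dB HL).
thresholds = {250: 30, 500: 10, 1000: 5, 2000: 15,
              3000: 55, 4000: 50, 6000: 45, 8000: 40}

def pta(th, freqs=(500, 1000, 2000, 4000)):
    """Classic four-frequency pure-tone average in dB HL."""
    return sum(th[f] for f in freqs) / len(freqs)

# ILLUSTRATIVE ONLY: hypothetical speech-importance weights per frequency
# (summing to 1.0). Real SII methods use published band-importance functions.
importance = {250: 0.05, 500: 0.13, 1000: 0.22, 2000: 0.25,
              3000: 0.18, 4000: 0.12, 6000: 0.03, 8000: 0.02}

def crude_audibility_index(th, speech_ceiling_db=50):
    """Rough fraction of speech information still audible: each band
    contributes its importance weight, scaled by how much of an assumed
    0-to-ceiling speech range lies above the listener's threshold."""
    total = 0.0
    for f, w in importance.items():
        audible = max(0.0, min(1.0, (speech_ceiling_db - th[f]) / speech_ceiling_db))
        total += w * audible
    return total

print(f"PTA: {pta(thresholds):.0f} dB HL")  # 20 dB HL -> "clinically normal"
print(f"Crude audibility index: {crude_audibility_index(thresholds):.0%}")
```

Even with these toy weights, the point survives: the averaging in the PTA discards the high-frequency losses entirely, while any frequency-weighted audibility measure reveals that a substantial share of speech information never reaches this listener.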

If we ever hope to isolate cause from correlation, we need to at least begin preserving enough information in our data to relate it to something meaningful—such as the brain’s access to external speech sounds—rather than rely on these arbitrary grades. No wonder it’s been so hard for us to make the case for hearing technology (Dawes et al, 2015).

Conclusion 5: Our current dependence on grades of hearing impairment is a symptom of having a paradigm that needs a way to separate the clinically normal from the hearing impaired, which itself is a symptom of adopting a medical model: diagnose a condition (hearing loss) and treat it accordingly (hearing technology).

Conclusion 6: This condition-based paradigm is constraining our thinking to such an extent that we lack the data and tools to answer three of the most fundamental questions of audiology: (a) what is the risk of reduced hearing?; (b) how reduced does hearing need to be before there is a genuine risk?; and (c) how can we measure the effectiveness of our interventions in reducing this risk?

Toward a New Paradigm: Shifting the Shortfall

One thing often overlooked under the traditional paradigm is that our hearing appears to benefit others as much as it benefits ourselves—because, when our own hearing capacity is reduced, we end up shifting the shortfall onto other people. They have to repeat themselves. They have to tolerate the louder TV volume. They have to talk more loudly. They have to forgo socializing because their partner can’t cope with the noise.

There’s a case to be made that we have a social responsibility to keep our own hearing maintained, whenever possible, for the sake of others: we generally all expect to hear each other the first time, accurately.

Let’s pursue this idea of “shifting the shortfall” a little further. Imagine that we had a shortfall in our income one month, but we still needed to pay all our bills. What would we do? We’d probably borrow from elsewhere, perhaps from a friend or maybe a loan company. We would shift the shortfall. If our own capacity had been greater, we wouldn’t have needed to shift the shortfall; we would have had more opportunities available to us without this dependence on others.

We can see a relationship here:

We use resources (such as money) to fulfill our goals.

The greater our resource capacity, the more goals and opportunities become available to us.

When there is a shortfall between our goals and our available resources, we must either use an alternative resource or abandon/compromise our goals.

Hearing as a Resource

We use hearing to help us meet our goals by connecting to others through a real-time fusion of thoughts and feelings called speech. This, in turn, establishes a shared and stable foundation for human development and achievement, as can be seen from our reliance on spoken language in relationships, education, business and commerce, health care, politics, and entertainment.

The greater our hearing capacity, the more goals and opportunities become available to us, as individuals and society.

If there is a shortfall between our hearing capacity and our goals, we must either use an alternative resource (such as using vision to sign or lip-read) or abandon/compromise our goals.

The Emergence of a New Paradigm

When we understand hearing capacity as a resource, other things fall into place, too.

Instead of having a reputation for stigmatizing people by separating the hearing impaired from the clinically normal, audiology becomes known for maximizing hearing capacity, and with it individual and societal potential, generating greater access to shared opportunities and goal achievement. Likewise, hearing technology ceases to be seen as a symbol of being older, deafer, and needier.

The need for hearing care is no longer about whether I might be a “candidate for treatment” and “different because I have a condition.” Instead, it’s about being all you can be, whatever your stage in life. Such aspirations tap into a human being’s most primal needs (Maslow, 1943), automatically creating desire and positivity where once there was antipathy and negativity.

In addition, when we see hearing capacity as a resource and that capacity is reduced, the risk to an individual from not using hearing technology becomes immediately apparent.

For a start, that individual is undermining his or her own potential and independence, just as they would by avoiding education or work. And any shortfall in hearing gets dynamically shifted to other available resources, forcing social, cognitive, and visual systems to increase their efforts accordingly.

Because of this shortfall, family and friends must make extra effort to repeat themselves. The individual’s memory must work harder to fill in the gaps. His or her attention and vision must focus carefully to pick up on facial cues. If the shortfall proves too great and exceeds even the capacity of these alternative resources, access to opportunity is compromised and the individual’s own potential fades.6 Both the individual and society lose out.

Increase the individual’s capacity with hearing technology and we reduce this shortfall.

Suddenly, the insecurity we have felt in audiology—the need to justify the importance of hearing loss in the face of “more serious” medical conditions such as cancer, heart disease, and Alzheimer’s—melts away. We are no longer in competition with these conditions. We are operating in an entirely different sphere. We have positioned hearing health care right where it belongs: at the heart of maximizing and maintaining human potential and achievement.

It is such a simple shift in the way we view and present audiology—by promoting hearing capacity rather than hearing loss—yet it has the power to shatter society’s indifference and open new opportunities for future research.

This may seem like a bold statement, but as Kuhn (2012) wrote more than 50 years ago: “When paradigms change, the world itself changes with them.”