What is ‘Normal’ Hearing?
Ask an audiologist what “normal” hearing is and, not surprisingly, you will get a variety of responses (Figure 1). Certainly, normal pure-tone threshold sensitivity does not rule out hearing difficulty or the presence of auditory pathology, including cochlear, peripheral neural, or central auditory deficits. Further, a number of non-auditory factors can contribute to a patient’s perceived hearing difficulty (e.g., cognitive capacity, attention, medications).
While hearing in the real world is much more complex than detecting brief pure-tone stimuli, hearing difficulty is strongly related to pure-tone audiometry. How we define normal pure-tone hearing is deeply entwined with our differential diagnoses and treatment recommendations and, therefore, has significant implications for people with hearing loss.
The Reference Sound-Pressure Level
The reference sound-pressure level for plotting auditory thresholds is 0.0002 dyne/cm² (also written as 20 µPa or 0.0002 microbar). Notably, this reference level appears in most audiology textbooks, but a citation for the study or studies that established it is more difficult to find. Recent textbooks provide no citation.
Some of our readers, of course, know that the reference level was set by an international agreement as a value that reasonably approximated the threshold of hearing at 1000 Hz in healthy young adults. But who were these healthy, young adults? Unfortunately, you will find limited description, as the reference was, in reality, based on numerous studies over many decades.
Early work laid the foundations for the ultimately agreed-upon reference sound-pressure level (Toepler and Boltzmann, 1870; Rayleigh, 1877; Wien, 1903; Fletcher and Wegel, 1922; Fletcher, 1923). For example, Wien (1903) reportedly provided the first recorded estimate of the amplitude of just-audible sounds, using an electromagnetic receiver as the sound source and optical measurement of its diaphragm movement. The observer listened through a hole in a large screen 30 cm from the receiver.
Fletcher and Wegel (1922) made use of a vacuum-tube oscillator, condenser transmitters, and thermal receivers to examine hearing sensitivity in 93 presumed normal ears (males and females, at least 20 females) and reported results in dynes per square centimeter (dynes/cm²). The testing was completed in a room layered with loose felt, sheet iron, and cheesecloth.
The results were compared with other work of the time, including that of Wien, Webster, Rayleigh, and the others listed above. The pressure required for perception fell near 0.001 dynes/cm². Reviews of this early work can be found in Fletcher’s Speech and Hearing (1929).
Sivian and White (1933) also reviewed much of the research of the time and published what they believed were the best weighted means of minimum audible pressure (earphone) and minimum audible field (sound field), from data collected on 14 presumed normal-hearing persons (called observers). The observers included 10 males and four females, mostly ages 18–26 years (one participant was 40 years old). At 1000 Hz, the minimum audible field was reported as 1.9 × 10⁻¹⁶ watts/cm².
Around the same time, Fletcher and Munson (1933), in their paper “Loudness, Its Definition, Measurement, and Calculation,” identified their reference intensity as 10⁻¹⁶ watts/cm², corresponding to a pressure of 0.000204 dyne/cm², the first published use of this reference the author could find. The explanation: “an intensity of the reference tone in air of 10⁻¹⁶ watts/cm² was chosen as the reference intensity because it was a simple number, which was convenient as a reference for computation work, and at the same time it is in the range of threshold measurements obtained when listening in the standard method…”
In 1949, at the International Congress of Audiology in London, Edmund Fowler and Erhard Lüscher proposed many standards, including 0.0002 dynes/cm² as the standard reference for sound pressure at 1000 Hz. The remaining frequencies were proposed to be set according to Fletcher and Munson’s equal-loudness contours, which used the same reference because it was an easy number and fell within the general range of minimal audibility.
Despite this proposal, variable references remained in use for a few years, commonly “dB above 1 dyne/cm².” The issue, of course, is that 1 dyne/cm² (about 74 dB SPL) is already a level higher than conversational speech, which resulted in negative values.
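The arithmetic behind those negative values can be sketched in a few lines. The conversational-speech pressure used here (0.02 Pa, roughly 60 dB SPL) and the function name are illustrative assumptions, not values from the sources above.

```python
from math import log10

def db_re(pressure_pa: float, ref_pa: float) -> float:
    """Decibel level of a sound pressure relative to a reference pressure."""
    return 20 * log10(pressure_pa / ref_pa)

speech = 0.02        # assumed conversational-speech pressure, Pa (~60 dB SPL)
ref_20upa = 20e-6    # modern reference: 20 µPa = 0.0002 dyne/cm²
ref_1dyne = 0.1      # older reference: 1 dyne/cm² = 0.1 Pa

print(db_re(speech, ref_20upa))  # -> 60.0 dB re 20 µPa
print(db_re(speech, ref_1dyne))  # -> about -14 dB re 1 dyne/cm² (negative!)
```

With the 1 dyne/cm² reference, any sound quieter than 74 dB SPL comes out negative, which is why the 20 µPa reference, sitting near the threshold of hearing, was the more convenient zero.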
The earliest basis for audiometric zero came from data collected in the U.S. National Public Health Survey of 1935–1936. These data underpinned the 1951 American standard (American Standards Association, 1951). After World War II, groups from the United States (U.S.), United Kingdom (UK), and Japan pursued further measurements of actual average normal thresholds across the frequency range.
As expected, discrepancies were found. Wheeler and Dickson (1952), researchers from the UK, questioned this standard due to concerns regarding the age of the participants, the clinical condition of their ears, ambient noise level, and method of testing. So, they completed their own study.
Wheeler and Dickson (1952) collected data from males only, ages 18–23 years. Their thresholds were some 10 dB better than the American standard. Wheeler and Dickson reported their data relative to 0.0002 dynes/cm².
Their data, and those of numerous other groups (e.g., Dadson and King (1952), who completed similar studies in both males and females), set the standard for the International Organization for Standardization (ISO) in 1963. In 1969, the U.S. published new standards comparable to the ISO values, based on updated data from Aram Glorig and colleagues (Glorig et al, 1956). These have been updated over time.
The ‘Normal’ Range Around Average ‘Normal’ Hearing
What should be the cutoff for a normal pure-tone audiogram? More importantly, what is the literature-based evidence for this selected boundary? As reviewed, the reference zero is a well-informed estimate of the lowest sound level a human adult with presumed normal hearing sensitivity can detect.
Of course, we expect variability around the average level and accept some variance in our measurement. In research, an oft-used cutoff for statistical significance is two standard deviations (SD) from the mean. In statistics, the SD is a measure of variation.
In a normal distribution, the range within two SD of the mean encompasses about 95 percent of observations. The SD for pure-tone audiometry is approximately 5 dB on average (depending on frequency), so 2 SD would place the upper limit of normal 10 dB above the mean normal threshold (0 dB HL). Applying 15 dB as the cutoff corresponds to about 3 SD, which encompasses 99.7 percent of observations.
Statistically speaking, then, 15 dB HL is a conservative cutoff for normal variance around the 0 dB HL mean, and anything higher would be considered statistically significant by most statistical designs. Conversely, someone with thresholds below −15 dB HL should be recognized for super hearing! (Okay, just kidding.) In reality, the distribution of lowest detectable sound levels is likely skewed toward 0 dB HL and above.
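The coverage figures above follow directly from the normal distribution. The sketch below assumes the roughly 5 dB SD stated earlier; the function name is our own.

```python
from math import erf, sqrt

def coverage(k_sd: float) -> float:
    """Fraction of a normal distribution lying within ±k_sd standard deviations."""
    return erf(k_sd / sqrt(2))

mean_hl = 0.0  # average normal threshold, dB HL
sd_hl = 5.0    # assumed ~5 dB SD for pure-tone thresholds

for k in (2, 3):
    cutoff = mean_hl + k * sd_hl
    print(f"{k} SD -> cutoff {cutoff:.0f} dB HL, covers {coverage(k):.1%}")
```

Running this reproduces the 10 dB HL (about 95 percent) and 15 dB HL (about 99.7 percent) boundaries discussed above.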
This is also in line with what we consider a clinically significant change in hearing, or a significant threshold shift (STS). A change in hearing greater than 10 dB at multiple frequencies, or greater than 15 dB at a single frequency, is routinely considered a significant threshold shift (Centers for Disease Control and Prevention, 1998). Unfortunately, we do not often have a baseline audiogram.
A patient complaining of hearing difficulty presenting with thresholds at 10–25 dB HL may be dismissed as having normal hearing. Nonetheless, it is plausible that, 10 to 15 years ago, their thresholds were at 0–10 dB HL.
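When a baseline audiogram is available, the STS rule described above can be applied mechanically. This is a minimal sketch of our reading of that rule; the function name and the dictionary format (frequency in Hz mapped to threshold in dB HL) are our own.

```python
def significant_threshold_shift(baseline: dict, current: dict) -> bool:
    """Flag a significant threshold shift per the rule described above:
    a shift greater than 10 dB at multiple frequencies, or greater than
    15 dB at any single frequency (our reading of the criterion)."""
    shifts = [current[f] - baseline[f] for f in baseline]
    return sum(s > 10 for s in shifts) >= 2 or any(s > 15 for s in shifts)

# Example: a 20 dB shift at 2000 Hz alone is enough to flag an STS.
print(significant_threshold_shift(
    {1000: 5, 2000: 10, 4000: 10},
    {1000: 5, 2000: 30, 4000: 10}))  # -> True
```

Without the baseline, of course, no such comparison is possible, which is exactly the dilemma described above.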
Defining Hearing Loss Using Pure-Tone Audiometry
Numerous definitions of hearing loss exist, and they are tied to their application. A determination of occupational-injury compensation and an epidemiological study of hearing-loss prevalence and incidence may use different definitions based on pure-tone audiometry and other methods. For example, in the 1940s, the Army, Navy, and Veterans Administration used a conversational-voice test at 20 feet. If the person could not hear and repeat the words, the tester would approach foot by foot until the words were repeated correctly; 20/20 was considered normal hearing and 10/20 a partial hearing loss (Carter, 1943).
Prior to the updated American National Standards Institute (ANSI) standard of 1969, the American Academy of Ophthalmology and Otolaryngology (AAOO) in 1959 devised a means for determining a percentage of hearing loss for speech. This percent-loss method was based on earlier work by the Council on Physical Medicine’s consultants on audiometers and hearing aids toward a uniform way to estimate percentage of hearing loss for speech for legal purposes.
This was set as the average of the hearing levels at 500, 1000, and 2000 Hz, with 1.5 percent impairment for each decibel that this average exceeds 15 dB. The original proposals were more complex, including weighted decibel losses for the better and worse ear and statistical measures (Fowler, 1942; Sabine, 1942; Harris et al, 1955).
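The simple 1959 rule, as described above, can be sketched as follows. The cap at 100 percent and the function name are illustrative assumptions, not part of the description in the text.

```python
def percent_hearing_impairment(thresholds):
    """Percent hearing impairment for speech per the 1959 AAOO rule as
    described above: 1.5% for each dB that the 500/1000/2000 Hz average
    exceeds 15 dB. Capping at 100% is our assumption for illustration.
    `thresholds` maps frequency (Hz) to hearing level (dB)."""
    pta = sum(thresholds[f] for f in (500, 1000, 2000)) / 3
    return min(100.0, max(0.0, 1.5 * (pta - 15)))

# Example: thresholds of 40, 45, and 50 dB give a PTA of 45 dB,
# i.e., 30 dB above the 15 dB "low fence" -> 45% impairment.
print(percent_hearing_impairment({500: 40, 1000: 45, 2000: 50}))  # -> 45.0
```

Note that the low fence itself is the crux: raising it (as discussed next) lowers every computed impairment percentage.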
However, these methods for determining percentage of hearing loss were based on reference zero levels that produced thresholds some 10 dB poorer than the updated ANSI standard and the corresponding ISO standard (1963). The response to “correct” this issue was simply to raise the “low fence” of normal audiometric threshold by about 10 dB.
The AAOO (1979) formula computing hearing impairment for speech did just that, establishing a 25 dB HL cutoff for normal hearing. However, other suggestions were abundant, including a return to using the reference 0.0002 dynes/cm² and not dB HL (audiometric zero).
Through the years, other methods were proposed that added consideration of the ability to understand speech in noise. For example, early work from Kryter et al (1962) and Harris (1965) demonstrated that, with distorted signals and noise, speech understanding was compromised when thresholds at 3000 and 4000 Hz were elevated.
The AAOO 1979 formula added 3000 Hz to reflect a more realistic degree of speech understanding, not only in quiet, but also in the presence of some noise. See Clark (1981), “Uses and Abuses of Hearing-Loss Classification,” for further review.
There is a rich history of defining hearing loss based on the pure-tone audiogram. However, even Fletcher, Fowler, and other foundational figures of the time recognized the limitations of pure-tone audiometry alone in the determination of hearing disability and speech recognition (see review by Harris et al, 1955). Interpretation of pure-tone audiometry and the determination of hearing loss have remained debated topics.
In 1981, John Greer Clark completed an in-depth review of the methods to calculate hearing impairment. As Dr. Clark points out, most hearing-loss classification systems from the 1960s to the 1980s designated 25 dB HL as the cutoff for hearing loss. Yet, at that time, many investigators began to recognize the adverse effects of even slight hearing loss in children, which included Northern and Downs (1978) adopting a definition of 15 dB HL for children.
In an attempt to recognize the implications of thresholds of 15 dB HL and greater in adults as well, Clark proposed a modification to the Goodman (1965) classification that included a slight hearing loss category from 16–25 dB HL. Martin and Clark (2000) and Martin and Champlin (2000) further advocated for a 15 dB HL upper limit of normal hearing sensitivity.
Martin and Champlin (2000) examined hearing difficulty to support this boundary. In their study, they reviewed data generated by a hearing aid manufacturer that included a breakdown of degree of hearing loss based on the pure-tone average (PTA) at 500, 1000, and 2000 Hz. The assumption was that persons purchasing hearing aids did so because of perceived hearing difficulty.
They found that, of the 556,000 patients, nearly 30,000 (5.3 percent) had PTAs less than 25 dB HL. They, of course, acknowledged that higher frequency hearing loss was likely involved, but based on the classic definition, these persons had normal hearing.
Spankovich et al (2018), using population-based data, found that, among National Health and Nutrition Examination Survey (NHANES) participants with normal hearing, defined as thresholds less than or equal to 25 dB HL at all frequencies from 500 to 8000 Hz in both ears, 10.4 percent reported hearing difficulty.
When the cutoff for normal was lowered to 15 dB HL, a slightly lower 9.9 percent of individuals reported difficulty. However, when reduced to 5 dB HL, only one participant reported hearing difficulty. This suggests that, even at 15 dB HL, patients can report some hearing difficulty, but as you approach a 0 dB HL cutoff (average normal hearing), it becomes less likely.
There is also a correlation between pure-tone audiometry and objective measures of auditory function. Otoacoustic emissions (OAEs) show a relatively strong correlation with pure-tone average and fair correlation with reported hearing difficulty (Engdahl et al, 2013).
Hussain et al (1998) showed that transient-evoked OAE amplitudes (stimuli presented at 70–80 dB peak SPL) separated clearly once thresholds rose above 15–20 dB HL at 3000 and 4000 Hz, with a marked reduction in amplitude. Similar relationships have been observed for distortion-product OAEs (using 65/55 dB SPL primary levels) (Gorga et al, 1999).
Auditory brainstem responses show a comparable relationship to pure-tone audiometry, with generally lower amplitudes and delayed latencies as pure-tone thresholds increase (Hall, 1992). Interestingly, Bramhall et al (2015) showed a relationship between wave I amplitude and speech-in-noise performance, but not until the PTA increased above approximately 15 dB HL.
Guidelines, Standards, Anyone?
What do our professional organizations tell us? If you go to the websites of the American Academy of Audiology and the American Speech-Language-Hearing Association and look for their cutoffs for normal hearing, you will see a familiar value. Both the Academy and ASHA recommend 15 dB HL as the cutoff for normal hearing classification based on pure-tone audiometry.
So What Is ‘Normal’ Hearing and Hearing Loss?
The better question is: what is “normal” pure-tone audiometry? From the literature, we can surmise that the lowest average detectable sound pressure in young adults corresponds to approximately 20 µPa (0.0002 dynes/cm²). With 20 µPa as the reference, this average lowest detectable SPL corresponds to audiometric zero. More than a century of research supports average normal pure-tone thresholds (i.e., audiometric zero).
Where we draw the line for normal hearing is more complicated. It is complicated because pure-tone audiometry captures hearing ability only in part. There are strong correlations among audiometry (and its various pure-tone average calculations), perceived hearing difficulty, and objective measures of auditory function. Nonetheless, pure-tone audiometry and objective measures have limited sensitivity and specificity for some variants of hearing loss and perceived hearing difficulty, notably synaptopathic-neural and central deficits.
Despite this limitation, the point at which we begin to observe deficits in functional measures of hearing, as well as perceived hearing loss, does appear to relate to the average lowest detectable sound level (0 dB HL) and the range we define as encompassing normal pure-tone audiometry.
An agreed-upon clinical definition for normal pure-tone audiometry is important. Based on the literature, statistics, and professional organization recommendations, 15 dB HL seems to be a fairly well-supported clinical cutoff. This cutoff should inform the counseling and management of patients whose audiometric thresholds fall outside the normative range. Further, it appears to be a reasonable cutoff for epidemiological and case-control studies in defining pure-tone hearing within normal limits versus hearing loss.