High-end or basic hearing aids: does the technology level make a difference? Catherine Palmer shares the current evidence base and suggests where hearing healthcare professionals can make the most impact.
The most common complaint from individuals with mild-to-moderately severe hearing loss is the inability to hear in noisy situations. As consumer electronics have advanced, hearing aid users have also become increasingly interested in connectivity. Hearing aid technology continues to develop at a rapid pace, with both signal processing advances and feature enhancements aimed at addressing these problems and goals.
In terms of signal processing, the focus continues to be on advanced noise reduction and directional microphone algorithms. Feature enhancement has focused on connectivity that allows the hearing aid user to take advantage of everything the smartphone offers other users through earphones.
Individuals pursuing hearing aids, and health systems committed to providing hearing healthcare, depend on the audiologist to sort through the evidence base and determine which new signal processing and features are valuable. Wu et al designed a study to examine the efficacy (can the signal processing provide benefit in ideal circumstances?) and effectiveness (does the signal processing provide benefit in real-world situations?) of advanced hearing aids compared to basic hearing aid technology [1].
“The differences between basic and advanced technology disappeared in real-world settings”
The investigators compared advanced directional microphone/noise reduction (DM/NR) circuitry to basic DM/NR in laboratory testing and ecologically valid testing, which included a smartphone-based ecological momentary assessment (EMA) system. The primary difference between basic and advanced hearing aid technology is the sophistication of the DM/NR, with the more sophisticated systems presumably providing increased user benefit.
Fifty-four older adults with mild-to-moderate sensorineural hearing loss participated in laboratory testing (speech understanding, listening effort, sound quality, localisation, and hearing aid satisfaction) and real-world testing through in-situ self-report (EMA), wearing each configuration for five weeks. The order of configuration (basic vs. advanced technology, with DM/NR on or off in each condition) was counterbalanced and the participants were blinded to the condition. In the laboratory setting (efficacy), the advanced hearing aids outperformed the basic hearing aids on speech understanding and localisation measures. The differences between basic and advanced technology disappeared in real-world settings.
All participants had better results with the DM/NR turned on rather than off for both levels of technology, but the technology level itself did not matter in real-world settings. The authors did caution that the results might differ for a population more exposed to noisy situations or with more demanding communication needs in noise. Of course, each manufacturer’s noise reduction and directional microphone responses differ, and Figure 1 reminds the reader that the response of these signal processing features can easily be measured. Figure 1 illustrates the reduction of gain by frequency at four levels of noise reduction.
Figure 1. Real ear measurements of four levels of noise reduction.
These data assist the clinician in selecting the noise reduction setting and counselling the patient in terms of expectations. The same measurements can be obtained for different directional settings or to compare directionality on and off. Although not of interest in this study, the other difference between basic and advanced technology is the number of independent channels. Research has shown that, for the purpose of restoring audibility, approximately four channels are adequate, but an increased number of channels may be valuable in managing feedback and providing the user with more gain before feedback in open fittings.
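To make the idea behind Figure 1 concrete, the sketch below shows how a per-frequency gain reduction can be derived from two real-ear measurement runs, one with noise reduction off and one with a given setting active. The dB values are invented placeholders for illustration, not data from the article or any particular device.

```python
# Hypothetical illustration: deriving the gain reduction by frequency (as in
# Figure 1) from two real-ear measurement runs. All values are placeholders.

# Real-ear aided response (dB SPL) measured in the ear canal for a steady
# noise input, by audiometric frequency (Hz).
frequencies = [250, 500, 1000, 2000, 4000, 6000]
rear_nr_off = [62.0, 68.0, 74.0, 78.0, 72.0, 65.0]     # noise reduction off
rear_nr_strong = [55.0, 60.0, 68.0, 74.0, 69.0, 63.0]  # a "strong" NR setting

# The gain reduction at each frequency is simply the difference between runs.
for f, off, on in zip(frequencies, rear_nr_off, rear_nr_strong):
    reduction = off - on
    print(f"{f:>5} Hz: noise reduction lowers the aided response by {reduction:.1f} dB")
```

Plotting these differences across settings produces a comparison of the kind shown in Figure 1, which can then inform setting selection and patient counselling.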
Wu et al’s findings add data to a large body of evidence that supports audibility as the primary treatment for hearing loss [1]. The audiologist’s essential role in hearing aid fitting is customisation, including matching technology to individual needs, physical fit, and acoustic fit of the signal processing. Valente et al illustrate that, on average, fittings based on the manufacturer’s first fit from audiological data are inadequate to provide audibility across frequency and input level [2].
“The primary difference between basic hearing aid technology and advanced hearing aid technology is the sophistication of the DM/NR”
For individuals with moderate-to-moderately severe hearing loss, the output of the hearing aid must be measured in the individual’s ear canal and adjusted to match evidence-based targets. This ensures an audible signal is provided across frequency and input level, given that audibility is the primary predictor of performance in quiet and noisy communication situations.
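A minimal sketch of this verification step is shown below: measured ear-canal output is compared with prescriptive targets at soft, average, and loud input levels, and any frequency falling outside a tolerance is flagged for adjustment. The measured and target values are placeholders; real targets come from a validated prescription such as NAL-NL2 and the patient’s own audiogram.

```python
# Hypothetical sketch: comparing measured real-ear output against prescriptive
# targets across frequency and input level. All numbers are placeholders.

frequencies = [250, 500, 1000, 2000, 4000]   # Hz
input_levels = [55, 65, 75]                  # dB SPL: soft, average, loud speech

# Measured real-ear aided response (dB SPL), by input level and frequency.
measured = {
    55: [48.0, 55.0, 60.0, 58.0, 52.0],
    65: [58.0, 66.0, 72.0, 70.0, 63.0],
    75: [68.0, 76.0, 82.0, 80.0, 72.0],
}

# Placeholder prescriptive targets (dB SPL) for the same levels and frequencies.
targets = {
    55: [50.0, 57.0, 63.0, 62.0, 58.0],
    65: [59.0, 66.0, 73.0, 72.0, 67.0],
    75: [67.0, 75.0, 81.0, 80.0, 74.0],
}

TOLERANCE_DB = 3.0  # flag deviations larger than this for adjustment

for level in input_levels:
    for f, meas, tgt in zip(frequencies, measured[level], targets[level]):
        deviation = meas - tgt
        if abs(deviation) > TOLERANCE_DB:
            print(f"{level} dB input, {f} Hz: {deviation:+.1f} dB from target -- adjust gain")
```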
The most recent data move the emphasis of expert-fit hearing devices from helping people hear better to directly impacting health outcomes [3] (healthy ageing in particular) and to promoting access to speech and language for our youngest patients, which impacts education and employment opportunities [4]. In a US population-based longitudinal cohort study, 2040 individuals over the age of 50 had cognitive performance measured every two years over 19 years, and new hearing aid use was identified over this period [3]. After controlling for a number of covariates (e.g. sex, age, education, marital status, wealth, smoking, drinking, physical activity, depression), the authors determined that hearing aid use had a mitigating effect on the trajectory of cognitive decline in later life. In other words, those who received hearing aids, regardless of many other covarying factors, had a less steep slope toward cognitive decline.
“The most recent data move the emphasis of expert-fit hearing devices from helping people hear better to directly impacting health outcomes”
Turning to our youngest patients, five-year results from the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study reported the primary factors influencing the development of speech, receptive and expressive language, and psychosocial skills (Figure 2) [4].
Figure 2. Factors influencing the acquisition of better speech, receptive and expressive language skills, and psychosocial skills in children with hearing loss.
The only one of these factors that otolaryngologists and audiologists can impact is the provision of cochlear implants or hearing aids at an earlier age. These data support the work and resources it takes to create seamless transitions from newborn hearing loss identification to follow-up diagnostics and the provision of amplification, if communicating through spoken language is the goal for the family and child.
Early, evidence-based treatment of hearing loss is good for children and adults. This care changes lives.
References
1. Wu YH, Stangl E, Chipara O, et al. Efficacy and Effectiveness of Advanced Hearing Aid Directional and Noise Reduction Technologies for Older Adults With Mild to Moderate Hearing Loss. Ear Hear 2018 [Epub ahead of print].
2. Valente M, Oeding K, Brockmeyer A, et al. Differences in Word and Phoneme Recognition in Quiet, Sentence Recognition in Noise, and Subjective Outcomes between Manufacturer First-Fit and Hearing Aids Programmed to NAL-NL2 Using Real-Ear Measures. J Am Acad Audiol 2018;29(8):706-21.
3. Maharani A, Dawes P, Nazroo J, et al. Longitudinal Relationship Between Hearing Aid Use and Cognitive Function in Older Americans. J Am Geriatr Soc 2018;66(6):1130-6.
4. Ching TYC, Dillon H, Leigh G, Cupples L. Learning from the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study: summary of 5-year findings and implications. Int J Audiol 2018;57(sup2):S105-11.
Declaration of Competing Interests: None declared.