Keeping Up with Moore’s Law

Brian Taylor, Au.D.

Since the beginning of the digital hearing aid era more than 25 years ago, audiologists have witnessed remarkable progress in signal processing capability. Driven largely by Moore’s law, the observation that computing capacity roughly doubles every 18 to 24 months, hearing aid manufacturers launch new platforms at about the pace this principle would suggest: every year and a half to two years, most manufacturers bring a more powerful chip to market with an array of new features and enhancements to existing ones.

This incremental progress yields better sound quality, improvements in the signal-to-noise ratio, and more cosmetically appealing devices, traits that expand hearing aid candidacy and improve wearer outcomes.

Today, this incremental progress is so common that we often take it for granted. But when you stop and think about it, a hearing device acquired in 2022 is far more sophisticated than one purchased in 1997. Not only do today’s hearing aids come in a range of stylish form factors, but most also offer direct streaming and features that automatically distinguish between quiet and noisy listening environments.

Despite all this innovation, there are still limits to how much any manufacturer can do with a new platform. That is, hearing aid manufacturers must make some tough choices about how their hearing aids process sound and which features operate under the hood. Figure 1 shows the three broad, interconnected categories in which manufacturers develop and commercialize new features.

Let’s examine these three categories in more detail. First, core signal processing refers to the features in hearing aids that restore audibility, improve the signal-to-noise ratio, and enhance listening comfort. Features involving gain, output, and compression, always working behind the scenes to shape and amplify sound, fall into this category. Core signal processing is the workhorse of the hearing aid: it is always operating, shaping sound into the wearer’s individual residual dynamic range.
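To make the compression idea concrete, here is a minimal, hypothetical sketch in Python of a wide dynamic range compression (WDRC) gain rule. The function name and the threshold, ratio, and gain values are illustrative assumptions for a single channel, not any manufacturer’s fitting prescription.

```python
def wdrc_gain_db(input_level_db, threshold_db=50.0, ratio=2.0, max_gain_db=30.0):
    """Illustrative single-channel WDRC gain rule (hypothetical values).

    Below the compression threshold the device applies its full linear gain;
    above it, gain is reduced so that louder inputs are squeezed into the
    wearer's residual dynamic range.
    """
    if input_level_db <= threshold_db:
        return max_gain_db  # linear region: full gain for soft sounds
    # Compression region: output rises only 1/ratio dB per 1 dB of input.
    excess = input_level_db - threshold_db
    return max_gain_db - excess * (1.0 - 1.0 / ratio)

# Example: a 50 dB SPL soft input gets 30 dB of gain (output 80 dB SPL),
# while an 80 dB SPL loud input gets only 15 dB (output 95 dB SPL),
# mapping a 30 dB input range onto a 15 dB output range.
for level in (50, 65, 80):
    print(level, "->", level + wdrc_gain_db(level))
```

In a real device this calculation runs continuously in many frequency channels, with attack and release times governing how quickly gain follows the input level; the sketch above shows only the static input-output rule.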

Second, we have wireless connectivity: the technology used to transmit sound wirelessly and directly into the hearing aid, and to let a pair of hearing aids communicate with each other. Modern hearing aids support several wireless transmission methods, including telecoils, FM, and Bluetooth. Transmission over 2.4 GHz radio, most commonly Bluetooth, makes up much of this category. In addition, near-field magnetic induction can be used for ear-to-ear audio transfer between hearing aids, which is an integral part of bilateral beamforming. Wireless connectivity benefits wearers by improving the signal-to-noise ratio of the listening situation and by enabling easier use of smartphones and other listening devices.

Third, the category that has emerged most recently is personalization: the wearer’s ability to fine-tune their devices using data from similar fittings or to interact virtually with their hearing care professional. Many of the most recent innovations in hearing aids, particularly those involving machine learning, fall into this category. Some experts believe these machine learning insights, once they become more user-friendly, will enable the rise of self-fitting hearing aids.

This issue of Audiology Practices is devoted to recent developments from our hearing aid manufacturing partners. Each of the six leading hearing aid manufacturers was asked to contribute an article on a signature feature within their current product line. We were lucky enough to hear from five of them. Looking back at Figure 1, note how each manufacturer devotes considerable resources to one of the three categories, striving to bring meaningful, purpose-driven features to persons with hearing loss. As this issue demonstrates, it’s good to know we have innovative partners in industry. ■