A revolutionary new otoscope is using artificial intelligence to dramatically improve access to ear and hearing care in South African outreach communities. Carolina Leal spoke to Professor De Wet Swanepoel of the University of Pretoria about how his team developed the new device.

 

 

Prof Swanepoel, thank you for taking the time to answer some questions. Could you tell us a bit about your background and how you got involved in the development of the hearScope?

The hearScope project started with an unusual combination of an audiologist (myself), a computer engineer (my colleague, Dr Herman Myburgh) and an ENT (Prof Claude Laurent from Umeå University, Sweden). As a research audiologist, my primary interest is in improving access to ear and hearing care in underserved populations. Based on my experience in South Africa and across sub-Saharan Africa, where there is typically less than one ear specialist for every million persons, we wanted to address the tremendous need for accurate diagnosis of ear disease for preventative care. Our research capitalises on the growth in information and communication technologies to explore, develop and evaluate innovative service delivery models and applied solutions.

What is the hearScope? How does it work?

The hearScope is a smartphone video-otoscope that employs artificial intelligence (AI) to support the diagnosis of ear infection (www.hearscope.io). The high-quality, variable-magnification otoscope ‘pen’ connects to a smartphone running the hearScope application. Users with minimal training can take images or videos of the ear canal and eardrum. Recorded media can be uploaded to a secure cloud-based data management system for archiving, expert opinions or an AI-supported diagnosis. The AI system employs image-analysis techniques and machine learning to classify images into five of the most common ear conditions, namely normal, wax obstruction, acute otitis media (AOM), otitis media with effusion (OME) and chronic suppurative otitis media (CSOM). Once determined on the cloud-based service, the AI-supported diagnosis is sent back to the mobile device.
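By way of illustration, the round trip described above (capture an image on the phone, upload it to the cloud service, receive the AI-supported diagnosis back) might look something like the sketch below. The endpoint URL, field names and response format are hypothetical placeholders, not the actual hearScope/hearX API.

```python
# Hypothetical sketch of the capture-upload-diagnose round trip.
# The URL, form fields and JSON keys are illustrative assumptions only.
import requests

CLOUD_ENDPOINT = "https://cloud.example.com/api/classify"  # placeholder, not a real service

def request_ai_diagnosis(image_path: str, patient_id: str) -> str:
    """Upload an eardrum image and return one of the five supported labels."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            CLOUD_ENDPOINT,
            files={"image": image_file},
            data={"patient_id": patient_id},
            timeout=30,
        )
    response.raise_for_status()
    # Expected labels: normal, wax obstruction, AOM, OME, CSOM
    return response.json()["diagnosis"]

if __name__ == "__main__":
    print(request_ai_diagnosis("eardrum.jpg", "patient-001"))
```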

“Our research capitalises on the growth in information and communication technologies to explore, develop and evaluate innovative service delivery models and applied solutions.”

How did you come up with the hearScope idea?

Earlier work we conducted with Swedish colleagues investigated video-otoscopy performed by trained laypersons in primary healthcare settings to improve access to diagnosis using an asynchronous telemedicine approach [1-3]. While the findings were promising, there were two main problems: firstly, video-otoscopes are expensive and require laptops; secondly, finding remote specialists to diagnose images and make appropriate treatment recommendations is challenging. The hearScope project commenced four years ago to address these barriers, alongside our work utilising smartphone technologies for hearing assessment.

How is the hearScope different from other video otoscopy instruments currently available on the market?

The hearScope is designed to connect to any smartphone as a video-otoscope. It is a low-cost solution that can deliver high-quality images of the eardrum and can be used by a layperson with minimal training. Most revolutionary, however, is the fact that the hearScope offers AI-supported diagnosis as an option. This integrates with a secure data management system that forms part of the larger smartphone and cloud-based hearing assessment portal by the hearX Group (www.hearxgroup.com).

 

 

Can you explain how the artificial intelligence and machine learning feature of the hearScope works? How reliable is it to diagnose ear diseases?

Our first image-analysis classification system, published in a Lancet journal [4], extracted visual features using tailor-made feature extraction algorithms, which were then classified using a decision tree. The diagnostic accuracy in correctly classifying normal, wax obstruction, AOM, OME and CSOM was 81%.
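As a rough sketch of this first approach (hand-crafted feature vectors classified with a decision tree), the snippet below uses scikit-learn with randomly generated stand-in features; the authors' tailor-made feature extraction is not reproduced here.

```python
# Sketch of the general technique in [4]: extracted image features fed to a
# decision tree. The feature values and labels are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

LABELS = ["normal", "wax obstruction", "AOM", "OME", "CSOM"]

rng = np.random.default_rng(0)
X = rng.random((500, 12))                    # 500 images x 12 extracted features (placeholder)
y = rng.integers(0, len(LABELS), size=500)   # placeholder ground-truth labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", tree.score(X_test, y_test))
```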

In a recent paper, we redesigned the system around a neural network with an improved classification approach, which increases accuracy to 86% [5]. This performance compares well with the classification accuracy of general practitioners and paediatricians (~64% to 80%) using traditional otoscopes. The artificial intelligence system positions itself to be the leading AI diagnostic and analysis tool in the mHealth sector for hearing healthcare.
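A comparable sketch of the neural-network variant is shown below, swapping the decision tree for a small feed-forward network; the network size and training data are again illustrative assumptions rather than the published architecture.

```python
# Sketch of the neural-network variant in [5]: the same kind of feature
# vectors, classified with a small feed-forward network instead of a tree.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((500, 12))            # placeholder feature vectors
y = rng.integers(0, 5, size=500)     # placeholder labels for the five classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1)
net.fit(X_train, y_train)
print("held-out accuracy:", net.score(X_test, y_test))
```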

“Users with minimal training can take images or videos of the ear canal and eardrum.”

Field trials are currently underway to expand the range of hearScope images with diagnoses confirmed by at least three specialists, to improve accuracy and to ensure that variables such as poor focus and partial imaging can also be isolated.

Is the hearScope compatible with all current smartphone technology and different software?

Currently, the hearScope is compatible with a predetermined list of approximately 65 Android devices. We are also working on iOS compatibility, which requires hardware and software registration processes with Apple; these should be completed in 2018.

How is the data obtained from the hearScope saved? How do you ensure confidentiality of patient health records?

Images taken by the hearScope can be saved locally on the phone or on our cloud-based Electronic Health Record (EHR) system, which makes information available instantly and securely to authorised users. The user can view hearScope images in the cloud on any computer, with secure access – online or offline. Our central data management platform is secured with healthcare-compliant 256-bit AES encryption of data, ensuring the necessary protection of client data (www.hearxgroup.com/mhealth).
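As a generic illustration of the encryption technique mentioned (256-bit AES), the sketch below encrypts a record with AES-256-GCM using the Python 'cryptography' package; it is not hearX's actual implementation or key-management scheme.

```python
# Generic AES-256-GCM sketch; not the hearX platform's actual implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(plaintext: bytes, key: bytes) -> tuple[bytes, bytes]:
    """Encrypt a record with AES-256-GCM; returns (nonce, ciphertext)."""
    nonce = os.urandom(12)                      # unique nonce per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce, ciphertext

def decrypt_record(nonce: bytes, ciphertext: bytes, key: bytes) -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)       # 256-bit key
nonce, ct = encrypt_record(b"eardrum image bytes", key)
assert decrypt_record(nonce, ct, key) == b"eardrum image bytes"
```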

How do you keep the instrument sterile to prevent cross-infection when you are working in the field? How durable is the hearScope?

The hearScope is supplied with reusable specula in various sizes (3, 4 and 5 mm). These specula can be used as disposables or can be sterilised to avoid cross-contamination. The hearScope comes with a one-year swap-out warranty.

“The artificial intelligence system positions itself to be the leading AI diagnostic and analysis tool in the mHealth sector for hearing healthcare.”

What has been the impact of the hearScope so far and how do you intend to develop it in the future?

The hearScope was launched through an Indiegogo crowdfunding campaign in August to involve people globally in the social impact nature of the solution. Interest exceeded all expectations, with more than 160% of the funding goal reached (www.indiegogo.com/projects/hearscope-next-generation-smartphone-otoscope). We will have to wait and see what the long-term impact is. We’re excited, however, to see how hearScope can support affordable and preventative access to ear care globally.

Where can clinicians purchase the hearScope?

The hearScope will be available for direct purchase from the hearX Group or via the website, retailing for less than $200 per device.

 

References

1. Biagio L, Swanepoel D, Laurent C, Lundberg T. Video-otoscopy recordings for diagnosis of childhood ear disease using telehealth at primary health care level. Journal of Telemedicine and Telecare 2014;20(6):300-6.
2. Lundberg T, Biagio L, Laurent C, et al. Remote evaluation of video-otoscopy recordings in an unselected pediatric population with an otitis media scale. International Journal of Pediatric Otorhinolaryngology 2014;78(9):1489-95.
3. Lundberg T, Biagio de Jager L, Swanepoel D, Laurent C. Diagnostic accuracy of a general practitioner with video-otoscopy collected by a health care facilitator compared to traditional otoscopy. International Journal of Pediatric Otorhinolaryngology 2017;99:49-53.
4. Myburgh HC, van Zijl WH, Swanepoel D, et al. Otitis media diagnosis for developing countries using tympanic membrane image-analysis. EBioMedicine 2016;5:156-60.
5. Myburgh HC, Jose S, Swanepoel D, Laurent C. Towards low cost automated smartphone- and cloud-based otitis media diagnosis. Biomedical Signal Processing and Control 2018;39:34-52.


Interviewed by Carolina Leal.

 

‘Spotlight on Innovation’ is an informative section to provide insight and discussion on recent advances in technology and research and does not imply endorsement by ENT and Audiology News.

CONTRIBUTOR
De Wet Swanepoel (Prof)

PhD, Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa. Research Director, WHO Collaborating Centre for the Prevention of Deafness and Hearing Loss.

CONTRIBUTOR
Carolina Leal

MSc, UCL Ear Institute, 332 Grays Inn Road, London, WC1X 8EE; Audiological Scientist, Cochlear Implant Programme, The Portland Hospital.
