AI is progressing apace. If you’re wondering how it might affect our working lives in ENT, read on for insights and a pilot study that show us what may be possible. The recent launch of ChatGPT, an open access artificial intelligence (AI) chatbot, has forced AI into the public spotlight. Media stories oscillate between the medical miracles that AI can perform and its adverse impact on the job market, personal privacy and security.
Meanwhile, the AI healthcare market has surged from USD 11 billion in 2021 to over USD 20 billion today, and is projected to reach USD 188 billion by 2030. In this rapidly evolving technological landscape, evidence-based discussion around the deployment of AI in healthcare is crucial. This article aims to give a broad overview of AI and its potential applications in otolaryngology, along with a specific real-world use case.
Overview of AI
The concept of AI dates back to the 1950s, when Alan Turing questioned whether machines could think in his paper ‘Computing Machinery and Intelligence’. In simple terms, AI refers to computer algorithms designed to perform complex tasks that would otherwise require human intelligence (Figure 1).
Figure 1: The spectrum of AI. Adapted from the FPF AI Infographic [4].
In the last seven decades, AI technology has weathered booms and winters [1], while healthcare-related applications have advanced steadily.
AI in otolaryngology
Applications of AI in healthcare can be broadly categorised into clinical decision making (through decision support systems) and supportive roles (ranging from predicting individual patient needs to streamlining processes and managing service capacity), with a potential for highly individualised treatment planning [2].
The following are a few potential applications across otolaryngology, which are just the tip of the ‘AI’ceberg of applications that have been developed [3]:
- Interpretation of images/videos – automated diagnostics across subspecialties, development of tele-endoscopic services, intraoperative decision making, training and education.
- Sound/voice analysis and pattern recognition algorithms – automated diagnosis of sleep and voice disorders, deeper insights into the pathophysiology of phonation, and improved hearing rehabilitation technology (a minimal sketch of such a pipeline follows this list).
- Natural language processing algorithms – automated patient communication tools, ambient medical scribes for varied healthcare settings.
- Multimodal algorithms – disease screening, triaging patient priority, phenotypic stratification, treatment prognostication, identification and mitigation of effects of racial and socioeconomic variables.
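To make one of these concrete, here is a minimal sketch of how a sound-analysis pipeline for sleep audio might look. It is illustrative only: the librosa/scikit-learn tooling, the MFCC features, the classifier and the synthetic audio are all assumptions made for this article, not any published or validated diagnostic algorithm.

```python
# Minimal sketch: classifying short audio clips of sleep sounds as
# "obstructed" vs "normal" breathing. Illustrative only -- the synthetic
# audio, features and model are placeholders, not a validated diagnostic.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

SR = 16_000  # sample rate (Hz)

def clip_features(y: np.ndarray) -> np.ndarray:
    """Summarise one audio clip as a fixed-length MFCC feature vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=13)  # (13, frames)
    # Mean and spread of each coefficient over time -> 26 numbers per clip
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-ins for labelled two-second recordings
# (0 = normal, 1 = obstructed); real systems use clinical datasets.
rng = np.random.default_rng(0)
labels = [0, 1, 0, 1, 0, 1]
clips = [rng.normal(0, 0.1 * (1 + label), SR * 2).astype(np.float32)
         for label in labels]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(np.stack([clip_features(c) for c in clips]), labels)

# Score a new clip
new_clip = rng.normal(0, 0.15, SR * 2).astype(np.float32)
prob = model.predict_proba(clip_features(new_clip).reshape(1, -1))[0, 1]
print(f"Estimated probability of obstructed breathing: {prob:.2f}")
```

In practice, validated systems are trained on large, labelled clinical datasets and undergo regulatory evaluation; the point here is simply that the core pattern (extract acoustic features, learn a classifier, score new recordings) is compact and well understood.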
The big picture
Let’s envision how these can come together to transform the care pathway for one of the most common conditions we treat as ENT surgeons.
Zahra*, a three-year-old refugee from a war-torn region, lives in a community where her family don’t speak the local language. They are provided with an unobtrusive earpiece that functions as a real-time universal translator, allowing effortless communication with those around them.
Her parents are worried about her gasping in her sleep and alert their community liaison. They are signposted to a mobile application that analyses Zahra’s sleep sounds and flags a potential diagnosis of obstructive sleep apnoea. The community health AI chatbot converses with the family to organise an appointment with the ENT/sleep service alongside a diagnostic home limited-sensor sleep study. This picks up severe sleep apnoea, and the hospital AI sends tailored information about the condition to the family in their own language. During her consultation it is noted that Zahra has enlarged tonsils, and an in-depth discussion follows around management; this is automatically documented by the ambient scribe, added to her clinical notes and sent through to the family as they step out of the clinic, along with further tailored information on management options. After further discussion at home, the family consents remotely to surgery, and the hospital AI schedules the procedure with appropriate urgency. Following surgery, Zahra goes home the same day with unobtrusive sticker biosensors that allow the clinical team to monitor her remotely through their hospital-at-home dashboard.
*AI-generated patient for this article.
Barriers to clinical deployment
This utopian view provides a glimpse into what is possible with algorithms that already exist and have been experimentally validated. There is, however, a noticeable chasm between the experimental validity of AI models and their real-world clinical deployment. Contributing concerns include limited access to diverse, bias-free data sets and outcome measures, a lack of standardisation in validation techniques, and the use of differing performance measures for benchmarking. Beyond clinical validity, a diverse ethico-legal landscape spans this chasm, and the historical lack of clinical expertise in, and understanding of, how algorithms work and are developed has perhaps widened it further.
The way forward
Recognising this disparity between technological advances and their applicability in healthcare, the global community has taken several steps to help overcome some of the barriers to clinical deployment. Recent regulations from the FDA and MHRA, aimed specifically at AI software as a medical device, hope to provide a framework for developing clinically robust algorithms and to address the ethical and legal concerns around their deployment. Fellowships in clinical AI now offer a structured pathway for interested clinicians to upskill and develop expertise in the clinical deployment of AI technology, with a view to building a clinical workforce that is well equipped to dispel the fog surrounding AI and to lead the reimagination of our specialty’s future.
Collaborating with an AI company – a case study: Ufonia & St George’s NHS Trust
The clinical problem
An increasing number of patients are being referred to secondary care with suspected head and neck cancer (HNC), the overwhelming majority of whom do not have cancer. Individual trusts are struggling to meet this referral demand, and patients on these pathways are often seen in busy, crowded clinics.
During the Covid-19 pandemic, the governing bodies of H&N surgeons recommended telephone triage using a validated risk calculator (HaNC-RCv2) – a set of symptom-based questions – to stratify patients into high or low probability of having cancer [5].
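To illustrate the mechanics, the sketch below shows how a symptom-based risk calculator of this general type can stratify patients. The symptoms, weights, intercept and cut-off are invented placeholders, not the published HaNC-RCv2 coefficients, which are reported in Tikka et al [5].

```python
# Illustrative symptom-based risk stratification. The symptoms, weights,
# intercept and cut-off are hypothetical placeholders -- NOT the published
# HaNC-RCv2 model; see Tikka et al [5] for the validated calculator.
import math

HYPOTHETICAL_WEIGHTS = {
    "persistent_hoarseness": 1.2,
    "neck_lump": 1.6,
    "unintentional_weight_loss": 0.9,
    "current_smoker": 0.7,
}
INTERCEPT = -3.0         # placeholder baseline log-odds
HIGH_RISK_CUTOFF = 0.10  # placeholder probability threshold

def triage(answers: dict[str, bool]) -> tuple[float, str]:
    """Turn yes/no symptom answers into a probability and a triage band."""
    log_odds = INTERCEPT + sum(
        w for symptom, w in HYPOTHETICAL_WEIGHTS.items() if answers.get(symptom)
    )
    probability = 1 / (1 + math.exp(-log_odds))  # logistic transform
    band = "high" if probability >= HIGH_RISK_CUTOFF else "low"
    return probability, band

# Example: answers gathered over an automated telephone call
p, band = triage({"neck_lump": True, "current_smoker": True})
print(f"Estimated cancer probability {p:.1%} -> {band}-priority pathway")
```

An automated telephone conversation only needs to collect the yes/no answers; the scoring itself is a simple logistic calculation.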
Ufonia
Ufonia, a digital health company, uses artificial intelligence to automate routine clinical conversations. They do this through their clinical voice assistant ‘Dora’, a UKCA/CE Class 1 approved medical device, which combines clinical evidence and AI to automate conversations via a natural-language telephone call. They were approached by St George’s H&N department to help develop a telephone conversation that would integrate a validated risk calculator for triaging suspected HNC patients. Funding for the project was sourced from two Small Business Research Initiative (SBRI) grants: the first to develop a prototype AI-led conversation, and the second to pilot the conversation at St George’s.
Patient ‘co-creation’
A patient and public involvement (PPI) lead was employed to coordinate PPI activities, which consisted of two round-table discussions and individual technology trials. Patients volunteered through an HNC charity. The round-table discussions encompassed perspectives on the AI technology and its use in HNC triage, and the final conversation was iteratively developed and refined based on patient feedback.
Technology pilot
The project was run as a departmental quality improvement project. Standard approvals, including clinical safety, information governance and data protection agreements, were sought and signed off prior to commencement of the project. Twenty-nine consented patients undergoing telephone triage also had a clinician-supervised AI triage using ‘Dora’. All calls were successfully completed, with an average agreement between the clinician and Dora of 89%. The technology was highly acceptable to patients, with a median net promoter score (NPS) of 8/10.
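As a brief aside on how agreement of this kind can be quantified: the sketch below computes raw percent agreement alongside Cohen’s kappa, which corrects for agreement expected by chance. The paired decisions are invented for illustration and are not the pilot data.

```python
# Quantifying clinician-vs-AI triage agreement on paired decisions.
# The two decision lists below are invented examples, not the pilot data.
from sklearn.metrics import cohen_kappa_score

clinician = ["high", "low", "low", "high", "low", "low", "low", "high"]
dora      = ["high", "low", "low", "low",  "low", "low", "low", "high"]

matches = sum(c == d for c, d in zip(clinician, dora))
percent_agreement = matches / len(clinician)
kappa = cohen_kappa_score(clinician, dora)  # chance-corrected agreement

print(f"Percent agreement: {percent_agreement:.0%}")  # 88% for this toy data
print(f"Cohen's kappa:     {kappa:.2f}")
```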
Conclusion
AI is revolutionising the way we both develop and deliver healthcare. This is a seminal moment in the history of healthcare advancement; we are moving into an era where groundbreaking changes are no longer the remit of eminent academics in lauded institutions. AI technology is increasingly freely available, accessible and in the pocket of every doctor and patient. Furthermore, digital companies are actively looking to engage and collaborate with innovative clinicians and hospitals to develop digital solutions to real-world clinical problems.
Progress in the beautifully complex world of healthcare tends to follow a long and arduous path taken in measured steps – find your AI tribe to support the journey!
References
1. Anonymous. Is the Next Winter Coming for AI? The Elements of Making Secure and Robust AI. Trustworthy and Socially Responsible Machine Learning (TSRML 2022).
https://openreview.net/pdf?id=FsTfsV018-
2. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism 2017;69S:S36–40.
3. Tama BA, Kim DH, Kim G, et al. Recent Advances in the Application of Artificial Intelligence in Otorhinolaryngology-Head and Neck Surgery. Clin Exp Otorhinolaryngol 2020;13(4):326–39.
4. Leong B, Jordan SR. The Spectrum of Artificial Intelligence: Companion to the FPF AI Infographic. 2021.
https://fpf.org/wp-content/uploads/2021/08/FPF-AIEcosystem-Report-FINAL-Digital.pdf
5. Tikka T, Kavanagh K, Lowit A, et al. Head and neck cancer risk calculator (HaNC-RC)-V.2. Adjustments and addition of symptoms and social history factors. Clin Otolaryngol 2020;45(3):380–8.
All links last accessed April 2024.
Declaration of competing interests: KM is employed by Ufonia.