To my mind, “Artificial Intelligence” is a counter-productive term that has been bandied about too liberally and has done much to hinder the adoption of new and incredibly useful decision-assistance algorithms by the wider healthcare community.

To quote NHSX:

“Artificial Intelligence has the potential to make a significant difference in health and care settings through its ability to analyse large quantities of complex information. We’re already seeing great applications of AI technology, but more needs to be done to fully harness its benefits and use AI safely and ethically at scale.”

Whilst I agree with the sentiment, I feel the terminology used is not particularly helpful.

In 2021, I am not aware of any functional example of true AI; Machine Learning, on the other hand, is relatively common, almost ubiquitous. At this point in time, the technology required to simulate human thinking does not exist. What does exist is the combination of advances in software, hardware and mathematics that enables a computer to learn from data. Thus it is the computer, not the human being, that derives the algorithm.
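To make that distinction concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library, with made-up example data: a human writes one decision rule by hand, while the computer derives an equivalent rule from labelled examples.

```python
# A minimal sketch: the first rule is written by a human, the second is
# derived by the computer from labelled data. The figures are invented
# purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hand-coded rule: a human chooses the threshold.
def hand_coded_rule(temperature_c: float) -> bool:
    """Flag a patient as febrile if temperature exceeds 38 °C."""
    return temperature_c > 38.0

# Machine-learned rule: the computer derives the threshold from examples.
temperatures = [[36.5], [37.0], [37.8], [38.2], [38.9], [39.5]]  # features
febrile      = [0,      0,      0,      1,      1,      1]       # labels
model = DecisionTreeClassifier(max_depth=1).fit(temperatures, febrile)

print(hand_coded_rule(38.4))        # True -- rule supplied by a human
print(model.predict([[38.4]])[0])   # 1    -- rule learned from the data
```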

Some will argue there is also Deep Learning, but to a layperson such as myself this is just a multilayered form of Machine Learning. What it is not is AI.

The main problem with using the term AI is the quite reasonable reluctance to hand over the final decision-making process to a machine. For the foreseeable future, the use of Machine Learning in medicine is semi-automated, i.e. it is used for decision assistance, with the human making the final call. Most clinicians baulk at the suggestion that AI should be adopted, citing that it is unsafe. AI might well be unsafe – if it existed!

The use of computers for decision assistance is not new; it is just that the techniques used to create the underlying algorithms have changed. Where once the algorithms were all hand-coded, now they can be created using machine learning techniques and large human-annotated datasets. Most clinicians using a software program built on a Machine Learning-derived algorithm would be unaware of it unless told, as the user interface gives no hint of the underlying technology.

For safety purposes, medical software based on Machine Learning-derived and human-derived algorithms undergoes the same testing process prior to release, using fixed reference data sets. Modern Machine Learning algorithms frequently outperform those hand-coded by a human in terms of positive and negative predictive values.
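As a hypothetical illustration of the headline metrics involved, here is a short Python sketch, using invented counts, that computes positive and negative predictive values from an algorithm’s results on a fixed reference data set.

```python
# A sketch of PPV and NPV, computed from an invented confusion matrix
# obtained by running an algorithm against a fixed reference data set.
true_positives  = 90   # algorithm says "disease", reference says "disease"
false_positives = 10   # algorithm says "disease", reference says "healthy"
true_negatives  = 880  # algorithm says "healthy", reference says "healthy"
false_negatives = 20   # algorithm says "healthy", reference says "disease"

ppv = true_positives / (true_positives + false_positives)  # positive predictive value
npv = true_negatives / (true_negatives + false_negatives)  # negative predictive value

print(f"PPV: {ppv:.2%}")  # 90.00%
print(f"NPV: {npv:.2%}")  # 97.78%
```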

Machine Learning, therefore, offers the possibility of treating people more efficiently and at lower cost. This is something the NHS realises it has to do if it is to cope with increasing demand for its services from an ageing population.

Let’s use the correct term, and perhaps the resistance to adoption will decrease.

Mark

P.S. In the past, I have overused the term AI myself.