Achieving its potential
None of this, of course, will be straightforward.
While the technical know-how to develop AI tools has progressed at breakneck speed, accessing the data to train these models can present a challenge. We risk being overwhelmed by the 'three Vs' of data - its volume, variety and velocity. At present, we're not using this data at anywhere near its full potential.
To become a world leader in driving AI innovation in healthcare, we will need massive investment from the UK government to enable researchers to access well-curated data sets. A good example of this is UK Biobank, which took a huge amount of foresight, effort and money to set up, but is now used widely to drive innovation by the medical research community and by industry.
Clinical data is, by its very nature, highly sensitive, so it needs to be held securely, and researchers who want to access it must go through a strict approvals process. Cambridge University Hospitals NHS Foundation Trust has established the Electronic Patient Record Research and Innovation (ERIN) database, a secure environment created for just this reason, with an audit trail that makes clear how and where data is being used, and with data anonymised so that patients cannot be identified. The Trust is working with other partners in the East of England to create a regional version of this database.
We need this to happen at a UK-wide level. The UK is fortunate in having a single healthcare system, the NHS, accessible to all and free at the point of use. What it lacks is a single computing infrastructure. Ideally, all hospitals in the UK would be on the same system, linked so that researchers can extract data across the network without having to seek permission from every NHS trust.
Of course, AI tools are only ever as good as the data they are trained on, and we have to be careful not to inadvertently exacerbate the very health inequalities we are trying to solve. Most data collected in medical research comes from Western - predominantly Caucasian - populations. An AI tool trained on these data sets may not work as effectively at diagnosing disease in, say, a South Asian population, which is at higher risk of conditions such as type 2 diabetes, heart disease and stroke.
There is also a risk that AI tools that work brilliantly in the lab fail when transferred to the NHS. That's why it's essential that the people developing these tools work from the outset with the end users - clinicians, healthcare workers and patients, for example - to ensure the tools deliver the intended benefit. Otherwise, they risk ending up in the 'boneyard of algorithms'.
Public trust and confidence in the safety of AI tools are fundamental to what we do. Without them, AI's potential will be lost. However, regulators are struggling to keep up with the pace of change. Clinicians can - and must - play a role here. This will involve training them to read and appraise algorithms, in much the same way they appraise clinical evidence. A better understanding of how algorithms are developed, and of how their accuracy and performance are tested and reported, will help clinicians judge whether the tools work as intended.
Jena's OSAIRIS tool was developed in tandem with Microsoft Research, but he is an NHS radiologist who understood first-hand what was needed. It was, in a sense, a device developed by the NHS, in the NHS, for the NHS. While such an arrangement is not always essential, the healthcare provider needs to be involved at an early stage; otherwise, the developer risks building a tool that is essentially unusable.