Emergency Medicine - AI based ECG interpretation

Artificial Intelligence
Rob Brisk
June 18, 2023

AI in the Emergency Department - past, present and future

Since Willem Einthoven stuck his hands and feet into buckets of salty water and began recording the heart's electrical activity at the beginning of the 20th century, the ECG has become an indispensable part of emergency medicine.

But despite the fact that we’ve known about some of the most important ECG diagnoses for over a hundred years - for example, Fred Smith first described a STEMI back in 1918 when he took ECG recordings while tying off the coronary arteries of dogs - ECG interpretation remains a stumbling block for many healthcare professionals.

Researchers have been trying to help us with this problem since the 1950s, when Hubert Pipberger founded a lab dedicated to computerised ECG interpretation. These days, virtually every ECG machine in a modern hospital will offer an automated diagnosis with each recording. These can certainly be helpful for flagging up major abnormalities for staff who don’t do a lot of ECG interpretation. But even after half a century of development, computerised ECG interpretation is notoriously prone to false positives, and woe betide anyone who tries to refer to cardiology based on an automated read-out.

The limits of traditional computers

So why is that? Well, it boils down to a simple truth about computer programming. Namely, if you can’t explain how to do something, you can’t program a computer to do it.

For example, take the pictures of dogs and blueberry muffins below. As a human, you can immediately tell the difference. But try to explain how, and you’ll probably struggle. The fact is, we’re extremely good at making sense of visual input, but we do most of our processing subconsciously.

For that reason, the field of “computer vision” hit something of a wall in the early noughties. No matter how many clever tricks they used, computer vision researchers just couldn’t get anywhere near human-level performance. This wasn’t just frustrating for the researchers. It put the kibosh on a range of enormously impactful applications like self-driving cars, medical image analysis, manufacturing defect detection… and, of course, ECG interpretation.

You might not think of ECG interpretation as a computer vision problem. After all, we often make ECG diagnoses using conscious logic, e.g. “if every QRS complex is preceded by a P wave then the rhythm is sinus”. However, that’s just the easy bit. The bit we don’t think about is discerning P waves and QRS complexes from what is, to most people, a bunch of wiggly lines on a page. Our brain just gets on and does most of the hard work for us, much like telling chihuahuas from blueberry muffins. For a computer, though, this can be really difficult, especially with noisy ECGs.
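To see why the conscious logic is “the easy bit”, here’s a minimal sketch of that sinus rhythm rule in code. It assumes the hard perceptual work - locating the P waves and QRS complexes - has already been done; the function names, annotation format, and PR interval cut-off are all hypothetical, for illustration only.

```python
# Illustrative sketch of the "easy bit" of ECG interpretation: applying a
# conscious rule ("every QRS complex is preceded by a P wave => sinus
# rhythm") once the waves have already been located. The annotation
# format and PR interval cut-off here are hypothetical.

def is_sinus_rhythm(p_wave_times, qrs_times, max_pr_interval=0.30):
    """Return True if every QRS complex is preceded by a P wave
    within a plausible PR interval (all times in seconds)."""
    for qrs in qrs_times:
        preceding_p = [p for p in p_wave_times
                       if 0 < qrs - p <= max_pr_interval]
        if not preceding_p:
            return False
    return True

# A regular rhythm: a P wave ~0.16 s before each QRS complex
p_waves = [0.10, 0.90, 1.70, 2.50]
qrs = [0.26, 1.06, 1.86, 2.66]
print(is_sinus_rhythm(p_waves, qrs))        # True

# A QRS complex with no preceding P wave
print(is_sinus_rhythm([0.10, 1.70], qrs))   # False
```

The genuinely hard part is producing `p_wave_times` and `qrs_times` from a raw, possibly noisy signal - exactly the subconscious pattern recognition that traditional programming couldn’t capture.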

The ImageNet moment

But that all changed in 2012, after something known as the “ImageNet moment”. ImageNet is a huge database of images that have been labelled according to which objects they contain. For example, the picture below would be labelled “aeroplane”. It was created and released in 2009 by a group at Princeton to provide a standardised benchmark for computer vision researchers worldwide. Every year, there’s a global competition where researchers can submit their algorithms for evaluation on a secret subset of ImageNet pictures to see how they measure up.

In 2012, a team from the University of Toronto blew everyone else out of the water with a machine learning algorithm they named "AlexNet". As it turns out, you can’t tell a computer how to make sense of images using conventional programming, but if you use machine learning you can show it instead. (See one of our previous blog posts for a brief intro to machine learning.) The Toronto team demonstrated this conclusively using a type of “digital brain” known as a convolutional neural network.
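The core operation inside a convolutional neural network is surprisingly simple: slide a small filter along the input and respond wherever the filter’s pattern appears. Here’s a toy, from-scratch illustration of that single operation - not AlexNet itself, and with a hand-picked filter rather than learned weights. Real networks stack thousands of such filters with nonlinearities and learn the weights from labelled examples.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid 1-D cross-correlation, the basic operation in CNN layers."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel)
                     for i in range(n)])

# A crude hand-picked "spike detector" filter: it responds strongly to
# sharp up-down deflections, a little like a QRS complex in an ECG trace.
# In a trained network, these weights would be learned, not hand-picked.
kernel = np.array([-1.0, 2.0, -1.0])

signal = np.array([0.0, 0.0, 1.0, 5.0, 1.0, 0.0, 0.0])
response = conv1d(signal, kernel)

print(response)                   # strongest response where the spike sits
print(int(np.argmax(response)))   # 2
```

The “showing rather than telling” in machine learning is simply the process of adjusting those kernel weights until the filters respond to the patterns that matter, driven by the labelled training data.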

Just a few years later, a team from Stanford made waves with a Nature Medicine paper in which they claimed cardiologist-level detection of AF using very similar technology to AlexNet. That might sound like hype - and professional bodies in emergency medicine like RCEM are certainly taking a sceptical approach to AI - but the FDA has recently approved AI algorithms for smartphone-based ECG recorders, loop recorders and even smartwatch devices. The number of publications on AI-based ECG interpretation is growing exponentially, so we can expect many more applications to come to market soon.

Emergency Department impact

AI-based ECG interpretation in the emergency department will be impactful in its own right. After all, cardiovascular disease is still the leading cause of death worldwide and the missed case rate of treatable, life-threatening conditions like STEMI remains uncomfortably high. If AI can make even a small dent in this, it could save many lives.

But what makes ECG AI particularly interesting is that it’s set to be one of the first types of clinical AI to go mainstream in emergency medicine. While the benefits may be substantial, implementing AI in a clinical setting isn’t always straightforward. In particular, AI algorithms notoriously suffer from a “black box” effect, meaning that it’s impossible to open them up and examine their inner workings the way you can with a conventional computer program.

The trouble with black-box algorithms is that it’s difficult to anticipate how and when they might go wrong. That makes it much harder to guard against particularly high-impact errors compared with conventional software. Google gave a very public illustration of this when one of its image recognition algorithms blindsided its creators by confusing a young couple of Afro-Caribbean heritage with a pair of gorillas. Tesla provided a much more tragic demonstration when one of their cars failed to spot a white lorry against a bright sky while in autopilot mode.

The future of ECG interpretation

It doesn’t take a huge leap of imagination to foresee the potential pitfalls of deploying AI at the point of care. In the world of ECG interpretation, conventional algorithms are expressly designed to have a very low false negative rate for serious diagnoses. The inevitable trade-off is a higher false positive rate, which can cause unnecessary referrals and resource consumption. But the error distribution is predictable, and that’s really important in a clinical setting.
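That trade-off comes down to where a classifier’s decision threshold is set. A schematic illustration (the scores, labels, and thresholds below are entirely made up):

```python
# Schematic illustration of the false-negative/false-positive trade-off.
# "scores" are a hypothetical classifier's confidence that an ECG shows a
# STEMI; "labels" are the ground truth (1 = STEMI). All numbers are made up.

def confusion(scores, labels, threshold):
    """Count false negatives and false positives at a given threshold."""
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return fn, fp

scores = [0.95, 0.80, 0.60, 0.55, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0,    0]

# A cautious threshold: nothing serious is missed (no false negatives),
# at the cost of more false alarms - how conventional ECG algorithms
# are tuned.
print(confusion(scores, labels, 0.35))  # (0, 2)

# A stricter threshold: fewer false alarms, but a real STEMI slips through.
print(confusion(scores, labels, 0.70))  # (1, 0)
```

With conventional software, this threshold behaviour is explicit and auditable; with a black-box model, the equivalent trade-offs are buried in millions of learned weights, which is why its failure modes are so much harder to anticipate.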

AI-based ECG algorithms are likely to (eventually) provide higher overall accuracy, but it might be much harder to predict when they’ll go wrong. As a clinical community, we’ll need to decide how we manage this safely. We’ll also need to be mindful of the fact that AI algorithms are much more prone to bias, because certain ethnic and socioeconomic groups (particularly affluent Caucasian populations) tend to be overrepresented in the research datasets used to train them.

It might be a challenging road at times, but hopefully, it will pay big dividends in the long term as we move into an era of AI-enabled, 21st-century healthcare.

Luckily for us here at Eolas, the black box effect is much less of a problem for our AI use cases. To find out more about how we’re using AI to make sense of medical evidence and clinical guidance, check out our blog series on AI-enhanced medical search. Otherwise, thanks for reading, and don’t forget to visit our Eolas blog homepage for future posts!