AI is ‘better than doctors’ at predicting when patients are going to die – outstripping mystified medics in heart tests
- The machines were able to detect abnormalities in ECG results that doctors had missed
- They accurately predicted which patients were going to die within a year
- Finding abnormalities in ECGs three cardiologists had diagnosed as normal
Artificial intelligence can accurately predict how likely someone is to die within a year by looking at heart test results – even when doctors believe the person is healthy.
The machines were able to detect abnormalities in electrocardiogram (ECG) results and predict who was at higher risk of dying.
Researchers led by Dr Brandon Fornwalt at healthcare provider Geisinger in Pennsylvania, US, gave the artificial intelligence 1.77 million ECG results from 400,000 people to examine.
Two tests were carried out: one machine was taught to examine only the raw data from the ECGs, while the other was also fed information about the people the results related to – their sex and age.
To measure the AI’s performance they tested its ability to distinguish between two groups of people – those who had died within a year of taking the tests and those who survived.
WHAT IS AN ECG?
An ECG, or electrocardiogram, traces the electrical activity of the heart and can help doctors to detect abnormal rhythms that may occur due to a heart attack or heart conditions.
Miraculously, when asked to predict which people were more at risk of dying within the next year, the machines guessed very accurately.
Where a score of one is perfect – matching every ECG with the correct outcome – and 0.5 is no distinction at all, the machine looking at the raw data alone scored 0.85, achieving a clear distinction between the groups.
Surprisingly, the machine that had been given additional information such as the patients' sex and age did not do as well.
While still scoring between 0.6 and 0.8, the machine seemed less able to measure the likelihood that the patients would die when basing its predictions on information that doctors already use – such as age and sex.
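The scoring described here appears to be the area under the ROC curve (AUC), the standard way to grade a predictor that must separate two groups: a score of one means every died/survived pair of patients is ranked correctly, and 0.5 means the model is guessing at random. A minimal sketch of how such a score is computed – illustrative only, with made-up numbers, not the researchers' code or data:

```python
def auc(labels, scores):
    """Area under the ROC curve, computed as the fraction of
    (positive, negative) pairs the model ranks correctly.
    labels: 1 = died within a year, 0 = survived.
    scores: the model's predicted risk for each person."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # A correct ranking counts as a win; a tie counts as half a win
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example with four hypothetical patients: one of the four
# died/survived pairs is ranked the wrong way round
print(auc([1, 1, 0, 0], [0.9, 0.2, 0.35, 0.1]))  # 0.75
```

On this scale a perfect predictor scores 1.0, a coin flip scores 0.5, which is why 0.85 from raw ECG voltages alone counts as a clear distinction between the groups.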
Dr Fornwalt told the New Scientist: ‘No matter what, the voltage-based model was always better than any model you could build out of things that we already measure from an ECG.’
The machine was able to detect abnormalities in ECGs that three trained cardiologists had determined were normal and nothing to worry about.
Dr Fornwalt added: ‘That finding suggests that the model is seeing things that humans probably can’t see, or at least that we just ignore and think are normal.
‘AI can potentially teach us things that we’ve been maybe misinterpreting for decades.’
Introduction of AI in such situations could see the rise of the superhuman doctor – however, it is not known what rhythms the AI has detected, which makes the unexplained diagnosis a little unethical in some doctors' opinions.
The findings will be presented at the American Heart Association's Scientific Sessions in Dallas, US, on November 16 – with researchers hopeful that they will be able to prove the method's significance with a clinical trial.
HOW DOES ARTIFICIAL INTELLIGENCE LEARN?
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.
Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.
The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge.
A new breed of ANNs called generative adversarial networks (GANs) pits the wits of two AI bots against each other, which allows them to learn from each other.
This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.
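The "trained to recognise patterns" idea can be illustrated with the simplest possible artificial neuron: a single perceptron that learns a pattern (here, the logical AND of two inputs) purely from labelled examples. This is a toy sketch of the principle, not the kind of deep network used in the ECG study:

```python
# A single artificial neuron (perceptron) learning the logical AND
# pattern from labelled examples -- a toy version of how ANNs are
# "taught" by feeding them data rather than rules.
def train_perceptron(samples, epochs=50, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights towards the correct answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labelled examples of the AND pattern: output is 1 only for (1, 1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Real systems stack millions of such units into deep networks and use more sophisticated update rules, but the basic loop – guess, compare with the right answer, adjust – is the same.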