
Who should be liable for medical errors caused by AI?

Machine Learning Algorithms (MLAs) analyze vast amounts of data at lightning speeds. Data sets that were once too large for humans to properly evaluate can now be exploited to make life-saving medical decisions. The burning question is whether AI should be allowed to make those choices. And, if yes, how does it affect doctors, patients, and current legal and regulatory frameworks?

Experts at the Health Ethics and Policy Lab in Zurich, Switzerland, are just one group beginning to raise alarm over the use of AI. A recently published paper expresses concern that patients could be denied vital treatments due to biases within MLAs. 

The crux of the problem revolves around how MLAs are being developed. The paper suggests that automated systems have primarily been trained using data mined from male Caucasian patients. This “lack of diversity” can lead to biases that cause errors. As a result, marginalised groups may end up suffering from higher medical failure rates.

Another pressure point is created by existing human biases within the “neural inputs” exploited by MLAs. Those massive data sets create the potential for AI to mimic or re-express existing human biases. 

The kinds of biases that could potentially pass from humans to AI include prejudices based on high Body Mass Index (BMI), race or ethnicity, and gender. This is highly disturbing, because researchers are already suggesting that AI is capable of making life-and-death decisions.
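To make the mechanism concrete, here is a minimal, hypothetical Python sketch (not taken from the Zurich paper, and built on entirely synthetic data) of how a model trained on a dataset dominated by one demographic group can end up far less accurate for an under-represented group:

```python
# A minimal, hypothetical sketch using entirely synthetic data: a model trained
# mostly on one demographic group performs noticeably worse on an
# under-represented group whose biomarker-outcome relationship differs.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, slope):
    """Synthetic patients: one biomarker x and a binary outcome y."""
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-slope * x))   # probability of the outcome
    y = rng.binomial(1, p)
    return x, y

# Training data: 95% group A, 5% group B (the "lack of diversity" problem).
xa, ya = make_group(9500, slope=2.0)    # group A: outcome rises with the biomarker
xb, yb = make_group(500, slope=-1.0)    # group B: the relationship is reversed
x = np.concatenate([xa, xb])
y = np.concatenate([ya, yb])

# Fit a single logistic-regression model to the pooled data by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

def accuracy(x, y):
    pred = 1 / (1 + np.exp(-(w * x + b))) > 0.5
    return np.mean(pred == y)

# Evaluate on fresh samples from each group: the model has simply absorbed
# group A's pattern, so its error rate on group B is far higher.
xa_t, ya_t = make_group(2000, slope=2.0)
xb_t, yb_t = make_group(2000, slope=-1.0)
print(f"accuracy on group A: {accuracy(xa_t, ya_t):.2f}")   # high
print(f"accuracy on group B: {accuracy(xb_t, yb_t):.2f}")   # much lower
```

In this toy setup, nothing in the training process tells the model that the minority group behaves differently; it simply learns the majority pattern, and the resulting errors fall disproportionately on the smaller group.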

In the UK, researchers recently published a study in which AI correctly predicted premature mortality better than traditional methods. Researchers believe this could allow algorithms to make use of “demographic, biometric, clinical and lifestyle factors” to single out patients who would benefit from earlier intervention. However, any failure to pinpoint patients due to inherited biases could cause treatment to be withheld from particular groups.

Another study suggests that AI can successfully identify cancer patients who are at high risk of 30-day or 150-day mortality. According to that research, AI could be used to flag up patients before they receive expensive chemotherapy, the idea being that it may be better to allocate that costly treatment elsewhere.

Research on Global Markets (RGM), which has conducted a study on medical robots, told ProPrivacy.com that “reports have suggested that cancer patients with severe bleeding have been recommended a drug that could cause the bleeding to worsen.”

On another occasion, an AI algorithm designed to predict which patients with pneumonia could be safely discharged incorrectly concluded that patients with a history of asthma had a lower risk of dying. RGM told us:

“This was because it was true from the training data, as patients with asthma usually went to the ICU, received more aggressive care, and so were less likely to die. The algorithm did not understand this and used the rule that if someone had asthma they should be treated as an outpatient.”
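To illustrate the kind of confounding RGM is describing, here is a small, purely synthetic Python sketch (the numbers are invented for illustration, not taken from the study). Because asthma patients in the historical data were routinely escalated to intensive care, they died less often, so a model that only sees the asthma flag learns exactly the wrong lesson:

```python
# An illustrative, synthetic reconstruction of the pneumonia/asthma problem RGM
# describes. All numbers are invented; they are not taken from the study.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

asthma = rng.binomial(1, 0.15, n)   # 1 = patient has a history of asthma

# In the historical data, asthma patients were routinely escalated to the ICU.
icu = np.where(asthma == 1, rng.binomial(1, 0.9, n), rng.binomial(1, 0.2, n))

# Underlying reality: asthma raises the risk of death slightly, while aggressive
# ICU care lowers it sharply.
p_death = 0.20 + 0.05 * asthma - 0.18 * icu
death = rng.binomial(1, p_death)

# A naive risk model that sees only the asthma flag (not the care pathway)
# observes *lower* mortality among asthma patients...
print(f"observed mortality, asthma:    {death[asthma == 1].mean():.3f}")  # roughly 0.09
print(f"observed mortality, no asthma: {death[asthma == 0].mean():.3f}")  # roughly 0.16
# ...and would therefore recommend outpatient care for exactly the patients whose
# low observed risk depends on the aggressive treatment hidden in the data.
```

The correlation the model finds is real; the danger lies in treating it as a safe basis for triage once the care pathway that produced it is taken away.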

Shailin Thomas, a research associate at Harvard University, notes that “even the best algorithms will give rise to potentially substantial liability some percentage of the time.” This inherent potential for liability creates a puzzle, because it is difficult to pin down exactly who should be held accountable for what is ultimately a guaranteed percentage of mistakes.

Karl Foster, Legal Director at Blake Morgan, told ProPrivacy.com that, for the time being, clinicians will remain liable:

“Ultimately, clinicians are responsible for their patients; it is an overriding principle of the medical profession. Use of AI is unlikely to change that position, certainly in the short term.”

“If we imagine AI interrogating test results and determining that a particular result increases the risk of developing a specific medical condition in a patient, ultimately - and currently - it is for the clinician to investigate further. The clinician will remain responsible for interpreting the data provided by AI in the light of other clinical information, and reaching a decision on the best treatment.”

Psychiatrist and data scientist Carlo Carandang, on the other hand, feels that liability could reside with manufacturers:

“AI apps will be treated as medical devices, so the performance of such clinical AI apps will be the responsibility of the companies that build them, and the FDA and other regulatory agencies that oversee such medical devices.”

RGM told ProPrivacy.com that although clinicians do currently appear to remain liable, “in the event of harm being caused by incorrect content rather than improper use of an algorithm or device, then accountability must lie with those who designed and then quality assured it.” RGM notes that “this line may not be so easy to define.”

Thomas is concerned that holding firms accountable could lead them to stop producing the algorithms altogether. This could be extremely detrimental to the medical industry, because AI is already proving its potential.

In China, for example, researchers used an algorithm to detect brain tumors more successfully than the nation’s best physicians. These kinds of breakthroughs can save lives, but only if the firms that produce AI can do so without constant liability concerns.

Michael Carson, senior lawyer at Fletchers Solicitors, believes that current UK legislation is fit to handle the emergence of medical AI. Carson told ProPrivacy.com:

“We should view AI as just another piece of hospital equipment. Any errors or misdiagnosis made by the AI should be dealt with as a medical negligence claim, with the AI merely being a tool used by the hospital.

“The law is likely robust enough already to deal with issues stemming from AI malfunctions. In reality, AI can be seen as just another blend of equipment and software, which is already prevalent throughout the National Health Service.”

RGM, however, notes that current legislation may not sufficiently distinguish between “cases where there is an error in diagnosis” due to the “malfunction of a technology” and cases caused by “the use of inaccurate or inappropriate data.”

At the end of the day, AI can only act on the data it is given. If that data is incorrect, or biased before it is inputted, it is hard to see how manufacturers can be at fault. On the other hand, it seems equally hard to blame medical professionals for decisions that were taken out of their hands.

Foster told ProPrivacy.com that current regulatory regimes in the US and Europe “do not currently anticipate machine learning where the software or data sets are designed to evolve.” As a result, questions surrounding liability are likely to evolve over time, and regulators will need to remain flexible to change.

Who should be liable for MLAs is a complex issue, and there is already some disagreement. One thing seems certain: given the speed with which medical AI is emerging, legislators need to be wary and must act quickly to ensure regulations are prepared to cope. Too often, when new technologies emerge, breakthroughs hit the market prematurely and legislators are forced to play catch-up.

One of the biggest problems with AI is that clinicians do not always understand why MLAs are making the decisions they do. This is because AI makes choices using massive data sets that humans cannot process. RGM explains that, because of AI's improved success rates:

“Doctors may find themselves incorrectly justifying decisions made by AI because of a well-documented concept known as automation bias. Here, humans can have a tendency to trust a machine more than they might trust themselves.” 

This potential for over-reliance is extremely concerning, especially when experts are warning that algorithms may come pre-programmed with human biases that can cause malpractice.

Written by: Ray Walsh

Digital privacy expert with five years' experience testing and reviewing VPNs. He's been quoted in The Express, The Times, The Washington Post, The Register, CNET and many more.

