
If an AI-trained surgical robot kills a patient, who is to blame?

From cosmetic procedures to heart operations, the introduction of AI will create an ethical minefield.
Credit: David S. Soriano / CC BY-SA 4.0 / Wikimedia Commons
Key Takeaways
  • Automated surgery is one of the many clinical areas where AI can be phased into current practice.
  • The intersection of AI with surgery is going to raise myriad ethical issues.
  • AI cannot replicate the decisions surgeons make based on gut instinct gained from unquantifiable clinical experience.
Excerpted from Future Care: Sensors, Artificial Intelligence, and the Reinvention of Medicine by Jag Singh. ©2023. Reprinted with permission of Mayo Clinic Press.

Can AI be our holy grail? Can it fix all health care ills, increase the value of care, and improve diagnosis, risk prediction, and treatment while decreasing physician burden and improving the patient experience? 

There are many levels at which AI can be phased into current clinical practice. One of them is automating surgery. In the past I have used magnetic navigation with Stereotaxis equipment to perform procedures. I could sit out in the console room, more than twelve feet from the patient, and maneuver the catheters within the heart using joysticks. The equipment enhances the precision of catheter movement and targets specific areas within the heart that could be causing the rhythm disturbance, while reducing my exposure to fluoroscopy (X-rays).

The natural next step is the use of AI to automatically navigate the catheter to the region of interest and, after some confirmatory tests, deliver the heat energy to destroy the circuit and terminate the arrhythmia. We have already begun using holographic augmented reality technology (SentiAR) to get a real-time, interactive, three-dimensional rendering of the anatomy of the inner surface of the heart. Holographic images with automatic navigation of the catheters may sound fictional, but in reality, we’re almost there.

The intersection of AI with interventional and surgical specialties is going to raise a multitude of ethical issues, most of which relate to bias and accountability. These issues will only be magnified as we drift toward a completely autonomous mode of operation, in which AI-initiated responses or interventions are self-directed. The datasets used to train AI can be biased to start with, which creates ethical challenges, especially if the downstream impact differs across patient subgroups, with different interventions leading to different outcomes in different patients.

Let’s use cosmetic surgery as an illustration. AI can now estimate an individual’s age by recognizing the facial features that contribute to that assessment, which in turn helps suggest surgical steps that could reduce a person’s apparent age by modifying those features. Cosmetic surgery is a big thing in South Korea. Surgeons there use motion-sensor surgical instruments that collect data in real time and guide them to make micro-adjustments that improve the outcome. But these AI algorithms have some inherent biases. In 2013, the Miss Korea pageant created a stir because of the similarity in facial features among contestants who’d had cosmetic surgery. Beauty, they say, is in the beholder’s eye, and this gets even more complicated if that eye is being dictated by an artificial intelligence. It goes without saying that AI algorithms such as these will not be generalizable across a variety of communities and ethnicities.

In a surgical environment or in a procedural laboratory, there is a lot more at stake. An AI-trained robot that freezes because of a technical issue, or goes out of control while dissecting, suturing, or manipulating catheters inside the heart, can lead to a catastrophic outcome. The ethical issues will be directly proportional to the extent of AI engagement. Robots will need to be trained on datasets of thousands of procedures performed under a variety of conditions, at different sites, by multiple operators. Avoiding harm is going to be key. And when harm does occur, who is responsible? Will it be the company that developed the autonomously operating robot, the surgeon, the hospital, or the contributors to the dataset?

AI cannot replicate the decisions surgeons make based on gut instinct. The gut-guidance reflex is tough to encapsulate and replace, as much of it is gained from unquantifiable clinical experience. Also, a single complete surgical operation requires thousands of intricate steps, involving cutting, dissecting, excising, connecting, burning, cooling, clamping, ligating, and suturing. For the foreseeable future, robots will serve only to assist; as they become more facile with basic functions, additional layers of complexity will be added, but ever so carefully.

