Boosting automation in robotic surgery with AI

AI has a huge role to play in realising more sophisticated forms of automation in robotic surgery.

Ross Law, November 19, 2024

With a global physician shortage, populations that are both growing and ageing, and the suggestion that caseloads are continuing to rise, a healthcare crisis is looming.

In the past three decades, surgery has shifted from open procedures towards less invasive approaches such as laparoscopy and endoscopy, bringing appreciable patient benefits including less blood loss, faster recovery times, and less scarring.

A clinical trial by University College London (UCL) and the University of Sheffield found that, compared with open surgery, robotic surgery reduced the chance of readmission by more than half and cut the prevalence of blood clots by 77%.

Now, research indicates that the use of robotic surgery is increasing even for straightforward procedures, a trend that has been associated with a decline in traditional laparoscopic minimally invasive surgery.

However, with the rise of robotic surgery, procedures are becoming more complex and harder for surgeons to perform independently. To alleviate this and other pressures, more assistance is needed in the operating room (OR), which is why further automation in surgical robotics systems is of growing importance.

With artificial intelligence (AI) becoming more sophisticated in recent years, the technology is rapidly intersecting with surgical robotics – a market GlobalData forecasts will reach a valuation of over $10bn globally by the end of this year – and holds the potential to drive further autonomy in the field.

As the market continues to grow, there is the potential for AI to further automate processes in robotic surgery, to alleviate workload on physicians, reduce the time spent on the most complex surgical procedures, and improve patient care.

Intuitive Surgical’s da Vinci surgical robot, the leading player in the robotic surgical systems market in the US as per GlobalData’s 2023 Market Model methodology, works on a “master-slave” model: the robot has no autonomy of its own but is controlled by the surgeon through a console, with images viewed through a stereoscopic display as procedures are undertaken.

This paradigm is gradually being iterated upon. Specifically, foundation models (FMs), deep learning neural networks which can be thought of as the next frontier in AI, hold the potential to open the door to further process automation and assistance for surgeons during robotic surgery.

Experts in the field of robotics from Johns Hopkins University (JHU) recently took part in a webinar to share insights into work they are undertaking that utilises AI to further automation in surgical robotics systems.

The robot-surgeon partnership

According to Russell H. Taylor, a John C. Malone professor in the Department of Computer Science at JHU, surgical robotics is fundamentally about the complementary capabilities of human and machine.

“We have some common capabilities, but machines are good at things that we are less good at, and vice versa,” Taylor says.

“We want this partnership to achieve the best of both worlds, and it’s important to realise that the robot is only one part of an intervention; you really are dealing with an information infrastructure and a physical infrastructure in the OR and throughout the entire treatment process and the hospital.”

In terms of the relationship between AI and robotics, Taylor’s view is that there are two factors to consider: how humans can tell the machine what it is supposed to do in a way the machine can understand, and how humans can be sure that, throughout a surgical procedure, the machine will do what it has been told to do and nothing it has not.

Taylor notes that the current paradigm in robotic surgery is for a surgeon to manoeuvre a robot using handles while watching the operative outcomes through a stereo endoscope.

“But all of the knowledge and planning beyond that very simple task specification and execution is in the surgeon’s head.”

The remark leads into Taylor’s point that the emergent paradigm in robotics, with the rise of AI, is to take greater advantage of the fact that a computer can mediate in many ways between the physician, the robot, the tools, and the patient.

He explains: “Surgeons can still do all of the hand over hand control, but they can also begin to ask the robot to provide them with more information, from sensors to the enforcement of safety barriers during surgeries.

“For this to work, what is crucial is that the computer controlling the robot and the physician need to have some shared situational awareness of what is going on with the patient, the tools, and the system, and what the task is.”
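One concrete form this kind of assistance can take is a software-enforced safety barrier, sometimes called a virtual fixture, that stops a commanded tool motion from entering a forbidden region. The sketch below is a minimal, hypothetical illustration of the idea in Python; the spherical no-go zone and the function name are assumptions for illustration, not any particular system’s implementation.

```python
import numpy as np

def enforce_safety_barrier(commanded_pos, no_go_center, no_go_radius):
    """Clamp a commanded tool position so it never enters a spherical
    no-go zone (a toy 'virtual fixture'; all geometry is illustrative)."""
    offset = commanded_pos - no_go_center
    dist = np.linalg.norm(offset)
    if dist >= no_go_radius:
        return commanded_pos            # command is safe; pass it through
    # Otherwise, project the position radially out to the barrier surface.
    direction = offset / max(dist, 1e-9)
    return no_go_center + direction * no_go_radius

# Example: a command that would dip into the protected region is pushed back.
safe = enforce_safety_barrier(np.array([0.0, 0.0, 1.0]),
                              no_go_center=np.array([0.0, 0.0, 0.0]),
                              no_go_radius=2.0)
print(safe)  # lands on the sphere of radius 2 along +z: [0. 0. 2.]
```

In a real system the constraint would be applied continuously in the robot’s control loop and the no-go region would come from registered patient imaging, but the principle of filtering the surgeon’s commands is the same.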

Taylor’s prevailing view is that robots in surgery have to be thought about in terms of what a surgeon is trying to do, and of the informational and physical environment within which the robot provides assistance.

“We can improve consistency, safety and quality, but in the end, for all of this to be valuable, you need whatever technology or autonomy is available to solve clinical problems to result in better clinical outcomes or more cost-effective processes.”

The road to complete autonomy in robotic surgery

In 2022, a team at JHU published research on the Smart Tissue Autonomous Robot (STAR), which performed the first laparoscopic surgery on pig models without any human help.

At the time, senior research author Axel Krieger, assistant professor of mechanical engineering at Johns Hopkins’ Whiting School of Engineering, said the team’s findings showed it was possible to automate the reconnection of two ends of an intestine – one of the most intricate and delicate tasks in surgery. In performing this procedure, STAR produced significantly better results than humans carrying out the same task.

More recently, Krieger’s team has been looking at an AI-based learning approach, facilitated through a transformer developed at JHU, that can improve with more data.

Krieger likens the transformer to the backbone architecture used in large language models (LLMs) like ChatGPT.

He explains: “In our case, we are using robotic action as an output, and learning how to perform fundamental surgical tasks like lifting tissue, needle pickup, and knot tying.”

This research, which JHU presented at this year’s Conference on Robot Learning (CoRL), held in Munich, Germany from 6-9 November, centres on ‘imitation learning’.

“The transformer learns by watching humans do procedures. We do different demonstrations of surgical sub-tasks and give those to the transformer learning model, which can then, fully autonomously, execute on them.”

Krieger says that STAR achieved complete autonomy after being shown around 500 demonstrations of tasks such as knot tying and needle pickup.

“What’s also exciting is that STAR exhibits robust retry behaviour, so if something goes wrong – such as a tool getting knocked out of STAR’s gripper – the architecture recovers and continues to perform, for instance, knot tying, without any error.

“Of course, this is all on kind of a suture pad level, and not yet clinical. What we've been continuing to explore over the last couple of months is whether this architecture works for real surgery.”
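To make the idea concrete, imitation learning of this kind can be sketched as behavioural cloning: a transformer policy is trained to regress the expert’s recorded actions from recent observations. The PyTorch sketch below is a simplified assumption of how such a policy might look; the class name, dimensions and chunked action head are illustrative, not the JHU team’s actual architecture.

```python
import torch
import torch.nn as nn

class PolicyTransformer(nn.Module):
    """Toy imitation-learning policy: maps a short history of observations
    to a 'chunk' of future robot actions (all dimensions are illustrative)."""
    def __init__(self, obs_dim=64, act_dim=7, d_model=256, chunk=10):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # Predicting several future actions at once tends to give smoother
        # execution than predicting one step at a time.
        self.head = nn.Linear(d_model, act_dim * chunk)
        self.act_dim, self.chunk = act_dim, chunk

    def forward(self, obs_seq):                  # (batch, time, obs_dim)
        h = self.encoder(self.embed(obs_seq))    # (batch, time, d_model)
        out = self.head(h[:, -1])                # summarise via the last token
        return out.view(-1, self.chunk, self.act_dim)

def train_step(policy, optimiser, obs_seq, expert_actions):
    """One behavioural-cloning step: regress the demonstrator's actions."""
    pred = policy(obs_seq)                       # (batch, chunk, act_dim)
    loss = nn.functional.mse_loss(pred, expert_actions)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

Trained on a few hundred recorded demonstrations, a policy of roughly this shape learns to reproduce the demonstrated sub-tasks, which matches the scale of data Krieger describes.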

Foundation models: the X factor in autonomous advancement?

Along with recent advancements in deep learning and AI capabilities, foundation models are emerging as key drivers of automation in surgical robotics because they allow for more flexible task formulations and data inputs.

Foundation models are deep learning models that can be trained on vast and broad, or highly specific, datasets and applied to a wide range of use cases.

“Foundation models unlock the power of AI analytics for variable tasks that we would otherwise have needed to train and develop specific models for,” says Mathias Unberath, John C. Malone associate professor at JHU’s Department of Computer Science.

Since 2017, Unberath and his team have been working towards a foundation model, with a particular focus on X-ray image analysis in image-guided surgery, developing paradigms and frameworks that can fully automate the generation of surgical training data in simulation, to scale the data generation pipeline.

“This paradigm has enabled us to generate immense amounts of perfectly annotated training data documenting surgical processes – some of them ones we already perform in surgery, and some of them new ones we would like to be able to perform in the future – and to use this data to build sophisticated AI models that we can then use to analyse intraoperative data and drive automation,” Unberath explains.
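The appeal of simulation is that annotations come for free: if a synthetic X-ray is rendered from a CT volume, the matching label can be projected along the same rays. The NumPy sketch below is a deliberately simplified, parallel-projection stand-in for such a pipeline; the attenuation model and function name are assumptions for illustration, not the team’s actual framework.

```python
import numpy as np

def synthesize_labelled_xray(ct_hu, structure_mask, axis=0, mu_water=0.02):
    """Toy digitally reconstructed radiograph (DRR) with a matching label.

    ct_hu:          3D CT volume in Hounsfield units.
    structure_mask: 3D boolean mask of the anatomy of interest.
    """
    # Rough HU -> linear attenuation conversion (per-voxel, illustrative).
    mu = np.clip(mu_water * (1.0 + ct_hu / 1000.0), 0.0, None)
    # Parallel-ray line integrals along one axis, then Beer-Lambert attenuation.
    drr = np.exp(-mu.sum(axis=axis))
    # The annotation comes "for free": project the 3D label the same way.
    label_2d = structure_mask.any(axis=axis)
    return drr, label_2d
```

Because the image and its label are produced by the same projection, every synthetic X-ray arrives perfectly annotated, which is what makes this kind of data generation scale.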

For surgical application, Unberath’s team has developed a foundation model called FluoroSAM, which interacts with X-ray images and can segment arbitrary structures in those images.

“We’ve been using this model to drive automation in surgery, and along with a language model, we have essentially built a fully autonomous robotic X-ray system for orthopaedic and endovascular applications.”

On a screen, this system can visualise requests made by a clinician during surgery. In practice, a surgeon may request a view of the right femur; the system automatically interprets the prompt, analyses the current X-ray image, and moves to the corresponding location over the patient. The system can also respond to other prompts, such as a request to segment a particular muscle.

“These systems, which can now leverage foundation models as the back end to analyse complicated images and act on top of them, are really going to be one core enabling factor for driving the adoption of autonomy.”
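Pieced together, the loop Unberath describes might look like the sketch below. Every interface here (extract_target, segment, move_to, mask_centroid) is a hypothetical placeholder standing in for the language model, the FluoroSAM-style segmentation model and the robotic C-arm controller; none of these are the team’s published APIs.

```python
import numpy as np

def mask_centroid(mask):
    """Pixel-space centroid of a binary mask (illustrative helper)."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def handle_surgeon_request(prompt, llm, seg_model, c_arm, display):
    """Hypothetical glue logic for a language-driven robotic X-ray system."""
    # 1. The language model turns free text ("show me the right femur")
    #    into a named anatomical target.
    target = llm.extract_target(prompt)

    # 2. Acquire the current X-ray and segment the target with a
    #    text-promptable, FluoroSAM-style model.
    image = c_arm.acquire_image()
    mask = seg_model.segment(image, text_prompt=target)

    # 3. Centre the device on the structure and show the result.
    c_arm.move_to(mask_centroid(mask))
    display.show(image, overlay=mask)
```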

In closing, Unberath says his team is interested in determining how the rise of autonomy will affect the responsibilities of surgeons and OR staff 10-15 years from now when, in all likelihood, all ORs will at least partly consist of autonomous systems.

“This is not simply about how we can build and enable technology that is autonomous and can achieve and perform at the level that we need in order to make patients healthier, but we also need to think about how the introduction of this type of technology changes the overall ecosystem that is healthcare.”

Robotics and automation in surgery seem inevitable. In the quest for less invasive procedures, surgical robotics is already shaping operations both straightforward and complex. Full automation is not quite a reality, and may not even be the desirable outcome; for now, a symbiotic relationship between human and robot in the OR appears to be the best way forward. Robots may be a long way from replacing a skilled surgeon, but in the OR of the future, it seems certain that surgical robotics systems will augment the field of surgery, enabling surgeons to continue doing what they do best, only with greater efficiency and insight.
