The World Health Organization (WHO) has recently outlined six considerations for the regulation of artificial intelligence (AI) in healthcare. The key considerations touch on the importance of establishing the safety and effectiveness of AI tools, making systems available to those who need them, and fostering dialogue among those who develop and use them.
The WHO recognises AI's potential in healthcare: it could improve existing devices and systems by strengthening clinical trials, improving diagnosis and treatment, and supplementing the knowledge and skills of healthcare professionals. AI has already enhanced several devices and systems, and its benefits are considerable. However, these tools also carry risks, which are compounded by the speed of their adoption.
AI technologies have been deployed quickly, and not always with a full understanding of how they will perform in the long run, which could harm healthcare professionals or patients. AI systems in medicine and healthcare often have access to personal and medical information, so regulatory frameworks are needed to ensure privacy and security. Other potential challenges include unethical data collection, cybersecurity risks, and the amplification of biases and misinformation.
A recent example of bias in AI tools comes from a study conducted by Stanford University. The results revealed that some AI chatbots gave responses perpetuating false medical information about Black people. The study ran nine questions through four AI chatbots, including OpenAI's ChatGPT and Google's Bard. All four chatbots used debunked race-based information when asked about kidney and lung function. Such false medical information is deeply concerning and could lead to a number of harms, including misdiagnosis or improper treatment for Black patients.
The WHO has released six areas for regulation of AI for health, citing a need to manage the risks of AI amplifying biases in training data. The six areas for regulation are:
- To foster trust, the publication stresses the importance of transparency and documentation, such as through documenting the entire product lifecycle and tracking development processes.
- For risk management, issues like intended use, continuous learning, human interventions, training models, and cybersecurity threats must all be comprehensively addressed, with models made as simple as possible.
- Externally validating data and being clear about the intended use of AI helps assure safety and facilitate regulation.
- A commitment to data quality, such as through rigorously evaluating systems pre-release, is vital to ensuring systems do not amplify biases and errors.
- The challenges posed by important, complex regulations—such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the US—are addressed with an emphasis on understanding the scope of jurisdiction and consent requirements, in service of privacy and data protection.
- Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners can help ensure products and services stay compliant with regulation throughout their lifecycles.
With these areas for regulation outlined, governments and regulatory bodies can develop rules that protect healthcare professionals and patients while allowing AI to reach its full potential in healthcare.