The United States government has issued a presidential executive order establishing new guidelines and planned regulations for artificial intelligence (AI).
On 30 October, the Joe Biden administration released what it describes as a new standard for AI safety and security, including a call for the responsible use of the technology in the healthcare sector in the hope of driving new drug discovery.
The executive order will require AI developers to share their safety test results and other critical information with the US government, and calls for the development of methods to protect against the risk of AI being used to engineer dangerous biological materials.
The directive also calls for voluntary commitments from companies working in AI to develop methods that help ensure the safety and personal security of US citizens are preserved as the field continues to grow.
The executive order states that these risks will be addressed “by developing strong new standards for biological synthesis screening”, adding: “Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.”
The US directive comes after the UK Prime Minister, Rishi Sunak, vowed to tackle fears surrounding AI ahead of the world’s first AI safety summit.
The US president’s executive order focuses in large part on how AI could be effective in driving the future of healthcare, especially in the field of drug discovery, but also on how, if unregulated, the technology could lead to greater inequality in access to healthcare.
It also announces a new pilot scheme, dubbed the National AI Research Resource, intended to provide AI research tools to stakeholders and researchers.
The executive order reads: “Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing.
“The Department of Health and Human Services will also establish a safety program to receive reports of – and act to remedy – harms or unsafe healthcare practices involving AI.
“Through a pilot of the National AI Research Resource – a tool that will provide AI researchers and students access to key AI resources and data – and expanded grants for AI research in vital areas like healthcare and climate change.”
Thematic research carried out by GlobalData estimates that by 2027 the global AI market will be worth $323.3 billion, up from $81.8 billion in 2022.
GlobalData is the parent company of Medical Device Network.