The focus in Washington is on how agencies should regulate AI's use in the private sector, with the FDA planning rules for its use in healthcare.
In healthcare, researchers are using machine learning to develop new treatments, and AI has even helped researchers identify high-risk strains of Covid-19. Doctors use it to diagnose diseases and plan treatment.
At the summit, Shannon Thyme Klinger, Moderna's general counsel, highlighted the possibility that AI could accelerate vaccine development and diversify the populations involved in research.
But imperfect algorithms can harm patients. AI has been used to cut off coverage for some Medicare Advantage members and, in some cases, has perpetuated racial bias in care.
“A really robust set of fairness and bias testing guidelines is needed,” Hirsh Jain, head of public health at Palantir Technologies, a Denver-based software company, told the summit.
Jain said the federal government and industry should work together to develop guardrails to avoid a patchwork of state-written rules.
Cris Ross, chief information officer at Mayo Clinic, said he recognizes the need for caution both in using patient data in AI and in using AI to make critical medical decisions.
However, Mayo is moving forward with the technology. Google recently announced that its artificial intelligence will be built into Mayo Clinic's computer systems to improve patient care.