Artificial intelligence is making critical health care decisions. The sheriff is MIA

20 February 2024 02:55

POLITICO has published an article noting that health care regulators say they need more people and more power to monitor the new tech. Caliber.Az reprints the article.

Doctors are already using unregulated artificial intelligence tools such as note-taking virtual assistants and predictive software that helps them diagnose and treat diseases.

The government has slow-walked regulation of the fast-moving technology because the funding and staffing challenges facing agencies like the Food and Drug Administration in writing and enforcing rules are so vast. It’s unlikely they will catch up any time soon. That means the AI rollout in health care is becoming a high-stakes experiment in whether the private sector can help transform medicine safely without government watching.

“The cart is so far ahead of the horse, it’s like how do we rein it back in without careening over the ravine?” said John Ayers, an associate professor at the University of California San Diego.

Unlike medical devices or drugs, AI software changes. Rather than issuing a one-time approval, the FDA wants to monitor artificial intelligence products over time, something it’s never done proactively.

President Joe Biden in October promised a coordinated and fast response from his agencies to ensure AI safety and efficacy. But regulators like the FDA don’t have the resources they need to preside over technology that, by definition, is constantly changing.

“We’d need another doubling of size and last I looked the taxpayer is not very interested in doing that,” FDA Commissioner Robert Califf said at a conference in January and then reiterated the point at a recent meeting of FDA stakeholders.

Califf was frank about the FDA’s challenges. Evaluating AI, because it is constantly learning and may perform differently depending on the venue, is a monumental task that doesn’t fit his agency’s existing paradigm. When the FDA approves drugs and medical devices, it doesn’t need to keep tabs on how they evolve.

And the problem for the FDA goes beyond adjusting its regulatory approach or hiring more staff. According to a new report from the Government Accountability Office, the watchdog arm of Congress, the agency wants more power: authority to request AI performance data and to set guardrails for algorithms in more specific ways than its traditional risk assessment framework for drugs and medical devices allows.

Given that Congress has barely begun to consider AI regulation, much less reach consensus on it, that could take a while.

Congress is traditionally loath to expand FDA’s authorities. And so far, the FDA hasn’t asked.

It has offered guidance to medical device makers on safely incorporating artificial intelligence, sparking a backlash from tech firms that say the agency has overreached, even though the guidance is legally nonbinding.

At the same time, some AI experts in academia and industry say the FDA isn’t doing enough with the authorities it already has.

Scope of authority

Advancements in AI have created big gaps in what the FDA regulates. It does nothing to review tools like chatbots, for example, and it has no authority over systems that summarize doctors’ notes and perform other critical administrative tasks.

The FDA does regulate first-generation AI tools as it does medical devices, and 14 months ago Congress granted the agency the power to allow makers of devices, some of which include early AI, to implement preplanned updates without having to reapply for clearance.

But the scope of the FDA’s powers over AI is unsettled.

A coalition of firms filed a petition with the FDA accusing the agency of exceeding its authority when it issued 2022 guidance saying that makers of artificial intelligence tools that offer time-sensitive recommendations and diagnoses must seek FDA clearance. Even though the guidance is legally nonbinding, companies typically feel they must comply.

The Healthcare Information and Management Systems Society, a trade group that represents health technology companies, also expressed confusion over the scope of FDA authority and how power over AI regulation is split between the FDA and other agencies within the Department of Health and Human Services, such as the Office of the National Coordinator for Health Information Technology. In December, that office set rules requiring more transparency around AI systems.

“From the industry perspective, without having some sort of clarity from HHS, it gets into this area where folks don’t know directly who to go to,” said Colin Rom, a former senior adviser to then-FDA Commissioner Stephen Hahn who now leads health policy at venture capital firm Andreessen Horowitz.

Meanwhile, the FDA told GAO that to proactively track whether algorithms are effective over time, it needs new authority from Congress to collect performance data.

The agency also said it wants new powers to create specific safeguards for individual algorithms, rather than using existing medical device classifications to determine controls.

The FDA plans to communicate its needs to Congress.

Oversight outsourced

But that still leaves it beholden to a gridlocked Capitol Hill.

As a result, Califf and some in the industry have proposed another idea: the creation of public-private assurance labs, probably at major universities or academic health centers, which could validate and monitor artificial intelligence in health care.

“We’ve got to have a community of entities that do the assessments in a way that gives the certification of the algorithms actually doing good and not harm,” Califf said at the Consumer Electronics Show last month.

The idea also has some support in Congress. Sen. John Hickenlooper (D-Colo.) has called for qualified third parties to audit advanced artificial intelligence. He’s thinking specifically about generative AI, the kind, like ChatGPT, that mimics human intelligence, though the oversight framework is the same one Califf has suggested.

That approach could have flaws, as some AI experts have noted, since AI tested on a major university campus might not work as well at a small rural hospital.

“You know as a practicing physician that different environments are different,” Mark Sendak, population health and data science lead at Duke University’s Institute for Health Innovation, told senators at a Finance Committee hearing on artificial intelligence in health care this month. “Every health care organization needs to be able to locally govern AI.”

In January, Micky Tripathi, the national coordinator for health information technology, and Troy Tazbaz, FDA’s director of digital health, wrote in the Journal of the American Medical Association that assurance labs would have to take that problem into account.

The article, which was co-authored by researchers at Stanford Medicine, Johns Hopkins University and the Mayo Clinic, calls for a small number of pilot labs to lead the way in designing validation systems.

But that collaboration among regulators, major universities and health care providers hasn’t reassured smaller players, who worry about conflicts of interest if the pilot labs are organizations that are also making their own AI systems or collaborating with tech firms.

Ayers thinks the FDA should be handling AI validation within its own walls and that makers of AI systems should at a minimum have to show that they improve outcomes for patients, regardless of who does the oversight.

He noted the failure of an AI system from electronic health records firm Epic to detect sepsis, a sometimes fatal reaction to infection, a failure that slipped past regulators. The company has since overhauled its algorithm, and an FDA spokesperson said the agency doesn’t disclose communications with specific firms.

But the incident has left many in health care and technology feeling like the agency isn’t using its current authorities effectively.

“They should be out there policing this stuff,” said Ayers.

Caliber.Az