Healthcare leaders say AI could save countless lives, but there are difficult ethical questions that must be confronted.
Artificial intelligence is poised to bring seismic changes to healthcare, and even at a health technology conference, there’s a mix of excitement and anxiety about that prospect.
During the opening keynote session Tuesday morning at the HIMSS Global Health & Technology Conference, a panel of experts discussed the opportunities and hazards of AI, and ChatGPT in particular, playing a greater role in health and medicine.
Peter Lee, corporate vice president of research and incubation at Microsoft, acknowledged some of the fears about AI in the delivery of healthcare. Microsoft has invested billions of dollars in the tech startup OpenAI, the producer of ChatGPT, which writes text and stories and answers questions posed by millions of users.
“There are significant awe-inspiring benefits, but also some scary risks,” Lee said in the morning session attended by thousands of healthcare professionals.
“There are tremendous opportunities here,” he added. “But there are also significant risks, and risks we probably don't even know about yet.”
Researchers have already been using AI to identify patients at risk of strokes, pregnancy complications, and delirium in the hospital. But healthcare leaders, technology experts and ethicists predict that ChatGPT carries the potential to revolutionize healthcare and research more broadly.
The panel included Kay Firth-Butterfield, chief executive officer of the Centre for Trustworthy Technology. A former judge, she raised some of the thorny legal questions involving the use of AI in healthcare, including questions of accountability. As she asked, “Who do you sue when something goes wrong?”
Reid Blackman, the CEO of Virtue Consultants and an AI ethics advisor, said that the general public probably doesn’t grasp the abilities and limitations of ChatGPT.
“It’s a word predictor that does amazing things, but it’s a word predictor,” Blackman said. “The average person will think they are engaging with a deliberating machine, but it’s a word predictor.”
Blackman added that he isn’t opposed to innovation and supports using artificial intelligence in healthcare. But he said healthcare leaders should be pushing for enterprise-wide governance of AI.
“It looks like it has phenomenal capacities that can save countless lives,” he said. “The question is, do we have a systematic way of assessing the risks and opportunities on a use case basis? And if you don't, then things will fall between the cracks, and you will miss out on opportunities and you will also inadvertently cause great harm.”
Firth-Butterfield also raised questions of health equity in the discussion of AI in healthcare. More than 100 million users have engaged with ChatGPT since its launch in the fall, an enormous number. But, as she noted, three billion people worldwide have no access to the internet.
Citing those questions of health equity, Firth-Butterfield was one of many leaders who signed a highly publicized letter calling for a six-month pause on the development of AI systems more powerful than GPT-4. Elon Musk, a co-founder of OpenAI, and Apple co-founder Steve Wozniak were among those who signed the letter.
“How do we design a future allowing everyone to access these tools? That’s why I signed the letter,” Firth-Butterfield said.
Andrew Moore, founder and CEO of Lovelace AI, urged the healthcare leaders in the large audience to get deeply involved in artificial intelligence. Health systems and healthcare organizations shouldn’t wait for the next iterations of AI models to get started, he said.
“Never think about artificial intelligence as being a thing which those amazing Silicon Valley people can do, and we’ll wait and see what happens,” Moore said. “Actually, all the responsibility is within this room.”
“Don’t wait for a small number of experts in Silicon Valley and Carnegie Mellon to do this for you,” he said. “It won’t happen.”
Moore described spoken dialogue systems as “the new browser.” More consumers, and more doctors, will be communicating through conversational interfaces, he said.
“You should start working on that right away,” Moore said.
In addition to the ethical questions surrounding AI in healthcare, Lee said there are still deep scientific mysteries to explore in the technology itself.
Referring to ChatGPT, Lee said he is frustrated by criticism that Microsoft “sprung this on the world.” He said the company expected 1 million simultaneous users, a figure that is now just a fraction of the millions who have at least dabbled with ChatGPT.
“ChatGPT was a complete shock to all of us,” Lee said.
“We had no clue based on our previous publications that anyone cared,” he added. “In a way it’s been sprung on all of us.”
Lee told the audience of healthcare leaders to study AI so they can make good decisions about its use in healthcare.
“I think the one thing I would urge people to do is get hands on yourself, and really try to get immersed and understand the technology firsthand,” Lee said. “And then, work with the rest of the health care community to ensure that it’s this community that owns the decisions about whether this technology is appropriate for use at all, and if it is, in what circumstances.”
This article originally appeared on Chief Healthcare Executive.