Artificial Intelligence has made a splash in the news in recent months with its ability to provide answers to complex questions that seemingly match – and sometimes exceed – what humans are capable of.
With a background in the field, Dr. Anthony Chang wanted to test the capabilities of this new breed of computers. As CHOC’s Chief Intelligence and Innovation Officer, he knew exactly what he wanted to ask: Why is artificial intelligence important for pediatric healthcare physicians?
Dr. Chang was the first of four speakers during a recent CHOC Grand Rounds presentation that delved into that question and others. During the next hour, these medical professionals would ponder the practical, legal, ethical and moral questions of artificial intelligence in a talk titled “ChatGPT: How AI and Large Language Models are Revolutionizing Pediatric Care.”
Also participating in the talk were three other CHOC experts who work closely with the Sharon Disney Lund Medical Intelligence, Information, Investigation, and Innovation Institute (Mi4) at CHOC and specialize in medical innovation: Drs. Terence Sanger, Chief Scientific Officer; Sharief Taraman, pediatric neurologist and clinical informaticist; and William Feaster, Chief Health Information Officer (CHIO) until his retirement in 2023.
AI as a benefit to pediatric healthcare
AI continues to improve rapidly, says Dr. Chang. He defined AI as the ability to mimic human intelligence. He said it will give younger generations of doctors an advantage in how they treat their patients.
“The accuracy of ChatGPT and GPT-4 has, I think, increased noticeably in the last six months or so,” he says.
He noted that AI has been successfully helping interpret images like X-rays for years, and recent developments have enabled it to examine moving images. But challenges remain. Research published by CHOC found that humans still have a better chance of detecting congenital heart defects than computers.
“It might be simple for humans, but it’s still relatively challenging for AI,” Dr. Chang says.
Dr. Chang noted recent advances and investments by tech giant Google, which acquired DeepMind, a company using AI to predict 3D protein structures.
“I think this is going to be very impactful in terms of drug discovery and vaccine design in the future,” he says.
Dr. Chang is also optimistic about advances in an AI tool called robotic process automation, which could be used for administrative tasks. He estimated that the global healthcare market for such technology will be more than $1 billion.
“I think a lot of the healthcare administration tasks that are relatively repetitive and mundane even can be taken over by this automated way of doing administrative tasks,” he says.
AI is advancing dramatically and quickly
AI has advanced in recent years because of several main factors, according to Dr. Chang: the development of better algorithms, the availability of cloud computing, and computer processors that grow more and more powerful. Access to large sets of data has been a particular boon for the healthcare field.
“There’s more data than ever before,” he says. “And most of the healthcare data has only been available in the last five years.”
Other recent advances include the ability of AI computers to complete more than one task at the same time, as well as the ability to mimic the way the human brain behaves, a process known as “deep learning.”
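The “deep learning” idea mentioned above can be sketched in a few lines of code: simple units, loosely inspired by neurons, are stacked in layers so that each layer builds on the previous one’s output. This is a toy illustration only; the weights and inputs below are made-up numbers, whereas a real network learns millions of them from data.

```python
import math

def neuron(inputs, weights, bias):
    # A single unit: weighted sum of inputs, then a nonlinearity (sigmoid).
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # A layer is just several neurons reading the same inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two stacked layers: the output of one feeds the next -- that stacking
# is what makes the model "deep."
x = [0.5, -1.2, 3.0]  # e.g., three pixel intensities from an X-ray
hidden = layer(x, [[0.1, 0.4, -0.2], [0.3, -0.1, 0.2]], [0.0, 0.1])
output = layer(hidden, [[0.7, -0.5]], [0.2])
print(round(output[0], 3))  # a score between 0 and 1
```

In a real system, training adjusts those weights until the final score matches known answers, which is how image-reading AI learns to flag an abnormal X-ray.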
While they seem to have mind-boggling abilities, large language models like ChatGPT are still in their early development. As they continue to progress, they will help doctors keep up with the ever-accelerating rate of medical advances. Dr. Chang said that when he was in training, medical knowledge doubled every couple of years. Now, it doubles every few months.
“I think it’s the beginning of a very exciting decade, much more exciting than a previous decade in terms of applying artificial intelligence to healthcare and medicine,” he says.
The future of AI in healthcare
Even though computers can communicate in ways that seem human, Dr. Chang asked if what they possess can be described as knowledge or as information. He said that they had yet to achieve knowledge, but that it could happen someday as they gain more and more information.
“We can perhaps think of ChatGPT as being able to provide ‘super information,’” says Dr. Chang.
“Of course, we’re still a long way from wisdom,” he says. “Wisdom, I think, is still something that requires humans to spend time with, and then it comes with experience. So maybe someday we’ll see wisdom as well in our artificial intelligence tools. But I think it’s going to be a decade or two off.”
Until then, Dr. Chang cautioned that computers are susceptible to making errors. But people who use them can help steer them away from mistakes and teach them to be more accurate.
“I think humans need to be a little bit forgiving in the early stages of these large-language models,” he says. “It’s not going to be perfect, just like humans are not perfect.”
Dr. Chang sees a future that includes intelligence-based pediatrics, which uses all of a patient’s available data, not just textbook information. In the future, doctors will have instant access to information that will enable them to make speedy bedside decisions. Another possibility is to create a virtual version of a patient so doctors can determine a proper management plan. And they will be able to share their information globally. He said the future is wide open and likened the current era to children learning to ride a bicycle.
“Please, especially the younger generation, please learn something about AI and how that can be applied to pediatrics,” Dr. Chang says. “We really need your help.”
Addressing the criticisms of AI
Dr. Taraman then talked about the uses and misuse of artificial intelligence in clinical care. A specialist in child neurology, Dr. Taraman said that there are already many uses of AI that don’t involve clinical care – time-saving applications such as registering patients and scheduling that can be used with natural language instead of cumbersome computer programs. He estimated that doctors spend more than 15 hours a week on tedious administrative tasks, something he said was unacceptable.
“These are quick wins in the healthcare system from a clinical aspect where large-language models can dramatically improve efficiencies,” he says.
One challenge facing doctors today is the constant onslaught of new data that is often difficult to access. It’s easy enough to look up lab reports, but reading through many notes can be difficult and important information may be missed.
“Why did they prescribe this medicine and what was their rationale?” Dr. Taraman asks. “The fact that we now have large-language models opens up our ability to gain and extract knowledge and understanding from healthcare data that we’re generating at huge volumes.”
Learning how large language models like ChatGPT work is important so doctors can know their abilities – as well as their limitations. Dr. Taraman mentioned the penchant physicians have for using acronyms when they write notes. The problem is that the same acronym can refer to more than one diagnosis or procedure.
“We should never use acronyms when we’re writing our notes,” he says. “But we do it because it’s burdensome. What we want to do is be able to flip these things. If I was dictating and I said, ‘This patient has CHF,’ you’d want it to convert to heart failure automatically.”
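Dr. Taraman’s acronym problem can be sketched in code. This is a hypothetical illustration, not CHOC’s actual system: the acronym table, the `expand` helper, and the keyword heuristic are all invented for the example, and a real large language model would disambiguate from context statistically rather than by keyword matching.

```python
# Some clinical acronyms map to a single meaning; others are ambiguous
# and need surrounding context to resolve.
ACRONYMS = {
    "CHF": ["congestive heart failure"],
    "MS": ["multiple sclerosis", "mitral stenosis"],  # ambiguous!
}

def expand(acronym, context=""):
    candidates = ACRONYMS.get(acronym, [acronym])
    if len(candidates) == 1:
        return candidates[0]
    # Crude keyword check standing in for what an LLM infers from context.
    for meaning in candidates:
        if any(word in context.lower() for word in meaning.split()):
            return meaning
    return acronym + " (ambiguous)"

print(expand("CHF"))  # congestive heart failure
print(expand("MS", "echo shows mitral valve narrowing"))  # mitral stenosis
print(expand("MS"))  # MS (ambiguous)
```

The point of the sketch is the middle branch: without context, “MS” cannot be safely expanded at all, which is why naive find-and-replace tools fail where context-aware models can succeed.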
Another task AI can handle is searching medical records to fill in missing information in a patient’s discharge papers. In one instance, a computer flagged information that had been left out of discharge summaries.
In another study, both doctors and an AI chatbot were asked the same questions, such as, “What is the risk of dying from swallowing a toothpick?” The answers were evaluated for accuracy and empathy by a blinded group of clinicians. Overall, the detail and empathy of the AI’s answers were rated better than the doctors’. It wasn’t that the human doctors didn’t know the answers or lacked empathy, but that they were busy and didn’t have the time to provide the well-rounded responses that the computer did.
“It’s not actually that the AI chatbot was more empathetic,” Dr. Taraman says. “It’s a matter of time constraints. If you have a busy service and are trying to respond to questions, the responses will likely be brief and therefore come across as less empathetic. Can we use things like this to help us not lose our empathy as clinicians, not get burned out and not have that exposure that reduces the quality of care? The answer is absolutely and resoundingly yes.”
But AI can have biases that it learns from humans. If bad information goes into the system, bad outcomes can result. For example, if medical records reflect structural racism, that bias can find its way into the AI.
“We have a lot of bias that’s baked into the way that we practice medicine and the research that we’ve done,” Dr. Taraman says. “And those biases are long-lived. AI has the potential to amplify or magnify those problems. So, if you have a biased medical record and you apply ChatGPT to it, you’re going to amplify the bias within the medical record.”
The role of AI in medical research
That led to a discussion on the potential use and misuse of artificial intelligence in research by Dr. Sanger, CHOC’s chief scientific officer. He said that large language models are based on statistical analysis that can take millions of data points to determine a specific response.
“It has a remarkable ability to be a reasonable facsimile of human behavior,” he says. “But it’s human behavior based on past behavior. And what I’m going to claim is that research is about innovation and coming up with new things. So fundamentally, there’s nothing here that can create anything new.”
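Dr. Sanger’s point that these models reproduce past behavior can be illustrated with a toy next-word predictor. The tiny corpus and `predict` helper below are invented for this sketch; real large language models apply the same statistical principle at vastly larger scale, with far more context than one preceding word.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" standing in for millions of documents.
corpus = "the patient has a fever the patient has a rash the nurse has a chart".split()

# Count which word follows each word in the past text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in the training data.
    # Note: it can only ever echo what the corpus already contains.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # patient ("patient" followed "the" twice, "nurse" once)
print(predict("has"))  # a
```

The limitation Dr. Sanger describes is visible here: the model can only recombine continuations it has already seen, which is his argument for why such systems imitate rather than originate.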
Since systems like ChatGPT are based on things people have already produced, who or what is responsible for its outcomes? Who gets the credit – or blame – when they make mistakes? Humans are to blame, Dr. Sanger says. They created the information that the computer analyzes.
“Only humans have agency,” he says. “Only humans have responsibilities. Those responsibilities arise from ethics and the ethical conduct of research.”
ChatGPT is “a tool like anything else,” he says. “It’s the same thing as if you use a calculator to do calculations. In the hospital, you were responsible for making sure that the result of that calculation is accurate. You can’t blame the calculator for the miscalculation.”
While computers can process huge amounts of data, they don’t have experiences outside of their circuits the way humans do.
“We’re drawing upon our experience as humans, experience of the world, experience that we’ve seen interacting with other humans,” he says. “Just the way a rock falls, the way water ripples, the way the air feels on your skin. There is all that information that contributes to your ability to make inferences, and computers don’t have access to that.”
And the information that computers do possess is skewed, because it relies on successful research and ignores negative results. Nor can they differentiate between good and bad research. For instance, Dr. Sanger mentioned research linking vaccines and autism. “They’re based on an initial false paper that was known to be fraudulent,” he says. “The AI would not know that.”
Systems like ChatGPT rely on information that already exists. They don’t have the ability to experiment.
“I know enough musical composition theory that I could perhaps write a sonata in the style of Mozart,” Dr. Sanger says. “That doesn’t make me Mozart. Mozart is famous because Mozart did it first.”
Ultimately, AI will propel humans to strive harder to come up with fresh ideas, to innovate.
“I think it’s going to push us to really raise our own standards for what we expect of ourselves in research and innovation,” Dr. Sanger says.
The legal ramifications of AI
Finally, Dr. William Feaster discussed the legal, ethical and regulatory issues of large language models. He cautioned doctors to stay abreast of the latest developments in technology.
“You’re not going to be replaced by a robot,” he says. “You’re going to be replaced by people who know how to use a robot.”
He referenced a scene in the Stanley Kubrick classic science-fiction film “2001: A Space Odyssey” where a sentient computer tries to lock an astronaut out of the spaceship. The scene highlights a fear some people have that AI can be dangerous to humans. Even Congress is holding hearings on the matter, a situation that troubles Dr. Feaster.
“Just what you don’t want to have happen is to get Congress involved in passing laws about something that’s as technical and complicated as AI, because they don’t understand anything about it,” he says.
Then there are the legal issues around doctors using artificial intelligence: Can doctors be sued for using AI? Can they be sued for not using AI?
“There’s lots of issues here that aren’t very well known yet,” Dr. Feaster says. “This is kind of a new area of law. What’s not new is what’s required for a successful malpractice case in that you have to establish a duty of care with a patient.”
Doctors have a duty to provide the best care they can. When they deviate from that standard and a patient is injured, they open themselves up to lawsuits. When does AI fit into that situation? Who would be responsible? Dr. Feaster called the current situation “the wild west” and said the government would probably have to get involved to help sort things out.
“Would an algorithm that you are using lead to patient injury?” Dr. Feaster says. “It’s not just the clinician here too, because someone developed that algorithm and then you have other people, maybe companies, individuals who would have some liability, multiple other players because you’re not developing it yourself.”
In terms of regulations, Dr. Feaster noted that healthcare is probably the most heavily regulated industry in the world. A current question facing government regulators is whether a software system like AI can be considered a medical device.
Privacy is also a concern. Dr. Feaster says that a patient’s personal information should not be entered into AI systems.
There are also ethical questions that need to be answered. If a computer system is trained on a population that differs from a particular patient’s, the system could discriminate against that patient. These kinds of biases already exist in hiring and financial decisions.
Artificial intelligence systems “shouldn’t reinforce existing inequities or embed new biases,” Dr. Feaster says. “It’s important that how we train our tools, that the data being used to develop those algorithms fairly represents the population that we’re serving.”
Learn more about medical innovation at CHOC