London (Parliament Politics Magazine) – General practitioners (GPs) have been using ChatGPT to help treat patients, a Harvard-led study has warned.
Researchers at the American university found that one in five family doctors in the United Kingdom had used artificial intelligence tools while treating patients, despite a lack of regulation. The survey of 1,006 GPs found that dozens were using AI to help diagnose conditions and identify treatment options.
A quarter of the 205 GPs who said they used AI tools in their work had asked the software to suggest treatments. Nearly three in ten said they had used AI to help diagnose a patient. Others said they had used it to write letters, generate documentation after a patient appointment, or create patient summaries and timelines based on records.
What are the potential risks of AI in healthcare?
Experts cautioned that unregulated use of tools such as ChatGPT, Microsoft’s Bing AI, or Google’s Bard could “risk harm and undermine patient privacy”. The study, which involved distributing a survey to family doctors via doctors.net.uk in February this year, was the largest of its kind to examine the use of AI in medical practice. ChatGPT was the most commonly used AI tool, with 16 per cent of GPs saying they had used the chatbot, which launched in 2022.
What are the limitations of using AI in medical practice?
AI is already being used in other NHS settings, for example helping radiologists interpret scans or build personalised 3D images of tumours, as well as assisting with administrative tasks such as booking in patients. But the researchers warned that there was a “lack of guidance” and there were “unclear work policies” for AI in general practice, and urged doctors to be cautious with the technology because it “can embed subtle errors and biases”.
The study was conducted by an international team led by Dr Charlotte Blease, a healthcare researcher at Harvard Medical School and associate professor at Uppsala University in Sweden.
“These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning. However, we caution that these tools have limitations since they can embed subtle errors and biases,” the authors wrote.
“They may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather.” The researchers said it was “unclear” how legislation to regulate AI in medical practice would work in reality, and called for doctors to be trained on the benefits and risks of the technology.