Social media is awash with digitally created “deepfake” videos that exploit the trusted identities of renowned doctors to promote hazardous miracle cures for serious health issues, experts warn. Videos on Facebook and Instagram leverage the credibility of popular TV doctors to advertise untested “natural” syrups for diabetes, with some even falsely claiming that the well-established first-line drug metformin “could kill” patients.

Experts fear these scams could endanger lives, particularly because they exploit the likenesses of well-known health experts, such as British TV presenter Michael Mosley, who died earlier this year.

“People do seem to trust these videos,” noted British doctor John Cormack. “A lot of these media doctors have spent considerable time building an image of trustworthiness, so they are believed even when making incredible claims,” added Cormack, who has collaborated with the British Medical Journal (BMJ) on this topic.

Artificial intelligence (AI) expert Henry Ajder observed that doctor deepfakes “really took off this year.” These AI-generated videos often target older audiences by impersonating doctors who frequently appear on daytime television.

French doctor Michel Cymes, a regular TV figure in France, revealed in May that he was taking legal action against Facebook owner Meta over “scams” using his image. British doctor Hilary Jones even hired an investigator to track deepfakes featuring his likeness. One such video falsely depicted Jones promoting a fake cure for high blood pressure, as well as cannabis gummies, on a UK TV show he regularly appears on. “Even if they’re taken down, they just pop up the next day under a different name,” Jones lamented in the BMJ.

‘Game of Cat and Mouse’

Recent advancements in AI have significantly enhanced the quality of deepfake images, audio, and video, explained French academic and AI expert Frederic Jurie. “Today, we have access to tens of billions of images, enabling us to build algorithms that can model and regenerate everything seen in those images. This is what we call generative AI,” he said.

It’s not just the likenesses of widely respected doctors being misused. The appearance of controversial French researcher Didier Raoult, accused of spreading misleading information about COVID-19 drugs, has also been used in several deepfake videos.

Australian naturopath Barbara O’Neill, condemned for claiming that baking soda can cure cancer, has been falsely depicted selling pills that “clean blood vessels” in TikTok videos. Contacted by AFP, her husband Michael O’Neill lamented that “a lot of unethical people” were using his wife’s name “to sell products she does not recommend, and in some cases, they are outright scams.”

Some fake videos spiral even further, falsely claiming that O’Neill died from a miracle oil sold on Amazon. AI expert Ajder was not surprised that controversial health figures were also falling victim to deepfakes. “They are highly trusted by people in circles that, let’s say, are unorthodox or conspiratorial,” he said.

Experts are not optimistic that newly developed AI detection tools can combat the surge of deepfakes effectively. “It’s a game of cat and mouse,” Jurie remarked. Rather than trying to identify all the fake videos, he pointed to technology that can “ensure content has not been altered, such as through messaging software that produces digital signatures like a certificate.”
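The authentication approach Jurie describes can be illustrated with a minimal sketch: the publisher computes a cryptographic tag over the content at release time, and any later alteration makes verification fail. The sketch below uses Python's standard-library `hmac` module with a hypothetical shared key as a simplified stand-in; real content-provenance schemes use asymmetric signatures, so that anyone can verify a certificate without being able to forge one.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration only. A real provenance system
# would use an asymmetric key pair: the publisher signs with a private key,
# and viewers verify with the public one.
SECRET_KEY = b"publisher-signing-key"

def sign(content: bytes) -> str:
    """Produce a tag certifying the content at publication time."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Return True only if the content is byte-for-byte unaltered."""
    return hmac.compare_digest(sign(content), tag)

original = b"video-frame-data"
tag = sign(original)

print(verify(original, tag))           # True: content unaltered
print(verify(b"tampered-frame", tag))  # False: any edit breaks the tag
```

The point of the design is that detection becomes unnecessary: instead of asking "is this video fake?", a viewer asks "does this video carry a valid certificate from its claimed source?", a question that tampering cannot answer in the affirmative.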