When people talk about the risks artificial intelligence poses to humanity, they tend to focus on the most catastrophic outcomes: machines that operate beyond our control, or AI-guided synthetic biology producing bioterrorism agents such as smallpox. Those are legitimate and deeply concerning risks. But in public health, some of the most dangerous crises come not only from what is real, but also from what people believe is real.
I’ve spent much of my career investigating infectious disease outbreaks and suspected bioterrorism incidents. I’ve seen how even a rumor can spread fear and confusion. Now imagine that fear fueled not by word of mouth, but by images, voices, and videos that appear completely authentic and are designed to manipulate. That is the immediate threat AI poses to public health.
In a new article for STAT, I outline a scenario that’s disturbingly plausible: an AI-generated simulation of a smallpox outbreak in a region of geopolitical tension, complete with fabricated patient videos, doctored lab reports, and fake audio of overwhelmed doctors. If such content were disseminated simultaneously across social media and echoed by influencers and elected officials, the result could be panic, misdiagnosis, overwhelmed health systems, and, if the setting involves rival powers, military escalation. It could even trigger a global response, including a declaration of a public health emergency of international concern.
Verifying an outbreak requires on-the-ground investigation combined with specimen collection and diagnostic testing. But if governments and the public have already made up their minds based on convincing fakes, the truth may not matter. Public health officials could be sidelined, overruled, or too late to stop decisions made by elected officials or military leaders.
While it remains critical to address long-term threats from AI and synthetic biology, we also need to confront this more immediate challenge. That means training health professionals to spot and respond to deepfakes, creating protocols for media authentication, and building stronger ties among public health agencies, technology companies, and security services.
A disease doesn’t have to exist to cause harm. In this new era, perception alone may be enough to start a war—or paralyze a country’s response to the next real outbreak.
The Immediate Public Health Risks from AI-Driven Disinformation
While existential threats from AI often dominate headlines, the more urgent risk may come from realistic fake outbreaks created using current AI tools. Below are key areas where this threat can destabilize global health and security.
1. Deepfakes Undermining Public Trust
AI-generated images, audio, and video that simulate real patients, doctors, or news reports can convince the public of an outbreak even when none exists. Once trust is broken, health authorities may struggle to reestablish credibility, even with real evidence in hand.
2. Geopolitical Escalation from Fabricated Crises
A fake outbreak in a politically tense region can spark military reactions before public health investigations can confirm or disprove the threat. If two nuclear-armed nations interpret a deepfake event as a biological attack, the consequences could be catastrophic.
3. Delayed or Misguided Emergency Response
Emergency responses based on false data can misallocate resources, delay real care, and cause unnecessary panic. Health systems may be overwhelmed not by a real pathogen, but by fear-driven behavior and policy decisions.
4. Historical Precedents Amplify the Risk
From medieval plague rumors to Cold War disinformation about HIV, history shows how lies about infectious disease can trigger violence and public health collapse. Modern AI makes it far easier and faster to spread such lies with apparent credibility.
5. Conflicting Standards Between Health and Security
In fabricated outbreak scenarios, public health agencies and military or intelligence bodies may disagree on what counts as sufficient proof. Security officials may demand airtight evidence before de-escalating, while public health experts may already know the threat is fake.
6. Lack of Deepfake Detection Protocols
Most health and security systems lack processes to vet the authenticity of media content during a crisis. Without established protocols and tools in place, officials might act on deepfakes instead of facts, especially under political pressure. (A minimal sketch of what one such verification step could look like appears after this list.)
7. Urgent Need for Training and Collaboration
Health professionals need training to recognize AI-generated content. Governments must also create alliances between health, tech, and security sectors to build detection infrastructure and response protocols. Without these, fake crises may do as much harm as real ones.
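To make the call for media-authentication protocols (item 6 above) concrete, here is a minimal sketch of one possible verification step: a health authority publishes a cryptographic fingerprint alongside each official media release, and a newsroom or platform re-checks a circulating file against it before amplifying. Everything here is illustrative, not any real agency's system; the names (AUTHORITY_KEY, sign_release, verify_release) are hypothetical, and a production deployment would use public-key provenance standards such as C2PA rather than a shared key.

```python
import hashlib
import hmac

# Hypothetical shared secret held by the issuing health authority.
# A real deployment would use public-key signatures (e.g., C2PA-style
# provenance credentials) rather than a shared key.
AUTHORITY_KEY = b"example-authority-signing-key"

def fingerprint(media_bytes: bytes) -> str:
    """Return the SHA-256 digest of a media file's raw bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

def sign_release(media_bytes: bytes) -> str:
    """Authority side: produce an HMAC tag over the file's digest."""
    digest = fingerprint(media_bytes).encode()
    return hmac.new(AUTHORITY_KEY, digest, hashlib.sha256).hexdigest()

def verify_release(media_bytes: bytes, claimed_tag: str) -> bool:
    """Verifier side: check a circulating file against the published tag.

    A match confirms the bytes are an unmodified authority release;
    any re-encode, crop, or deepfake edit changes the digest and fails.
    """
    expected = sign_release(media_bytes)
    return hmac.compare_digest(expected, claimed_tag)

# Usage: the authority publishes the tag with an official briefing video;
# a verifier re-checks the file before treating it as authentic.
official_clip = b"raw bytes of an official briefing video"
tag = sign_release(official_clip)
assert verify_release(official_clip, tag)
assert not verify_release(official_clip + b"tampered", tag)
```

The point of the sketch is the workflow, not the cryptography: during a suspected fabricated outbreak, officials would have a fast, agreed-upon way to separate media the relevant authority actually released from content that merely looks authentic.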