As artificial intelligence (AI) transforms healthcare, the World Health Organization (WHO) warns that insufficient legal safeguards could put patients and health workers at risk. AI is already being used to detect diseases, reduce workloads, and improve patient care, but gaps in regulation threaten safety and equity.
The WHO Regional Director for Europe, Hans Kluge, said, “The technology is reshaping how care is delivered, data are interpreted, and resources are allocated. But without clear strategies, data privacy, legal guardrails, and investment in AI literacy, we risk deepening inequities rather than reducing them.”
The report, the first comprehensive assessment of AI adoption and regulation in Europe, surveyed 50 of the region’s 53 countries. Nearly all see AI’s potential, from diagnostics to disease surveillance to personalized medicine, yet only four have a dedicated national AI strategy for health, with seven more developing one.
Some countries are taking proactive steps: Estonia links electronic health records, insurance data, and population databases in a unified platform supporting AI tools; Finland invests in AI training for health workers; and Spain is piloting AI for early disease detection in primary care.
However, regulation is struggling to keep pace: 86% of countries report legal uncertainty as their top barrier to adoption, 78% cite financial affordability, and fewer than 10% have liability standards for AI in health.
The WHO report recommends clear liability rules, transparency, and explainable AI. It urges countries to develop AI strategies aligned with public health goals, invest in an AI-ready workforce, strengthen legal and ethical safeguards, engage the public, and improve cross-border data governance.