Healthcare is the new frontline in cyber warfare. In the past 12 months, according to a survey of more than 1 000 healthcare organizations worldwide, nearly half have experienced at least one security incident that required a dedicated response from their security team. Close to 75 000 healthcare-related incidents were detected globally in 2025, with the US leading the charge, followed by India, Russia and the UK as the most targeted nations.
From healthcare record breaches to ransomware attacks
Electronic health records (EHRs) are a digital goldmine containing general health, diagnosis and billing information that make up each person’s medical identity. These can be sold on the dark web for reasons ranging from identity theft to insurance fraud and blackmail, often facilitating double extortion. A single healthcare record sold for USD 250 on the dark web in 2024, whereas credit cards fetched only USD 5. In worst case scenarios, cyber attacks can paralyze healthcare facilities through ransomware attacks and even threaten patients’ lives.
2024 was an annus horribilis for healthcare record breaches in terms of the number of people who saw their private health information compromised: in the US, some reports say it affected over 80% of the population. The FBI’s 2024 Internet Crime Report identified healthcare as the critical infrastructure industry with the highest number of reported cyber threats, totaling 444 incidents, which included 238 ransomware threats and 206 data breaches. Not content with targeting facilities providing direct care, such as hospitals, hackers have also set their sights on peripheral healthcare providers, including medical device manufacturers, healthcare billing providers and pharmaceutical companies.
The Change Healthcare ransomware attack of early 2024 was the landmark incident that exposed deep flaws in the US healthcare ecosystem. Since then, hundreds of incidents of varying sizes have occurred around the world. This summer, a data theft from a Dutch cervical cancer screening programme compromised almost half a million records, while in the UK, a patient died due to delays caused by a ransomware attack on Synnovis, an agency that manages labs for NHS trusts and doctors in south-east London.
Hospitals and healthcare-related outlets are targeted more than other industries because of the large amount of hackable digital records – each patient has one – and because healthcare providers cannot compromise the health of patients and refuse to pay ransomware, for example. These two factors combined make them ideal victims. In addition, they often have not invested in the latest cyber security tools. Finally, the increasing use of artificial intelligence (AI) – in addition to digital tech – for diagnosis and other medical acts creates its own vulnerabilities.
How AI is used in Swiss healthcare
Switzerland’s healthcare system is not immune to cyber threats. In early 2025, a report by the National Test Institute for Cyber security (NTC) found 40 moderate-to-severe vulnerabilities across Switzerland’s three most widely used hospital IT systems. Then, in June 2025, a Swiss health promotion non-profit organization serving various federal offices suffered a ransomware attack that released 1,3 terabytes of data, including financial records and internal communications, to the dark web.
August 2025 saw 18 Swiss health groups, including cantonal and university hospitals, founding a national cyber security centre to provide early warning systems against cyber attacks. A wise move, considering the increasing use of AI for enhancing diagnostics, personalizing treatment, making healthcare accessible and boosting operational efficiency. While AI is increasingly used as a tool in Swiss healthcare, it is also helping cyber criminals to hack existing systems.
Examples of the increasing use of AI in Swiss healthcare abound. In Geneva, the public university hospital (HUG) recently announced the creation of an AI hub focused on neurological and psychiatric conditions, while in Bern, the hospital group, the Insel Gruppe, is increasingly leveraging AI across various areas to enhance the efficiency and effectiveness of its operations. “AI is not used to make autonomous clinical decisions. Instead, it serves solely as a supportive tool to assist healthcare professionals in providing optimal patient care,” explains Urs Meier, Head of IT Governance, Risk and Compliance at the Bern-based organization. He adds that AI represents both an opportunity and a potential risk in the ongoing effort to protect hospitals and the medical services from cyber attacks.
Hackers use AI too
Despite the promises of AI, the healthcare industry in most countries still operates within or alongside legacy systems that are running on outdated infrastructure with vulnerabilities: a Citrix portal that lacked multi-factor authentication (MFA) was the culprit in the Change Healthcare debacle, for example.
Hackers are using AI assistants or large language models to coordinate attacks, while the integration of AI into medical settings is creating new internal vulnerabilities that cyber criminals can exploit. “The use of external AI tools increases the risk of critical data leaks that could also constitute a legal incident”, explains Eugene Neelou, who works for a large San Francisco-based cyber security firm. “On-premises AI tools used to access internal knowledge, like medical references and patient data, or to process medical imaging also significantly expand the attack surface for cyber criminals. Malicious emails or calendar invites might contain indirect prompt injections that exploit the victim AI system, while publicly exposed chatbots with access to internal data could be manipulated to reveal health data or access the backend system for further exploitation,” he adds.
“AI-powered devices can be manipulated with malicious inputs that could lead to insurance abuse. In addition, compromised software and malicious model updates could sabotage medical operations and result in wrong diagnosis at scale, potentially leading to deadly incidents,” he continues.
Agentic AI is expected to create future vulnerabilities
The rapid introduction of agentic AI – autonomous “agents” that interact with sensitive data and systems without the intervention of humans – has also created a critical threat. In October 2024, Gartner named agentic AI the top technology trend of 2025 and predicted 33% of enterprise apps will include agentic AI by 2028. “Agentic misbehaviour incidents show how, when granted autonomy, AI agents can cause trouble in production environments, execute destructive actions or allow attackers to control the system,” Neelou explains.
While 2025 saw a big surge in cyber attacks involving AI worldwide, no healthcare breaches have yet been directly attributed to its use, but it’s just a matter of time, according to Neelou. “Today, ransomware is still the number one threat, but criminals are already using AI to amplify their attacks,” he explains. “Due to the state of AI security technologies, true and effective attacks against AI are likely to stay undetectable for a long time. They are most likely happening already, unbeknown to compromised companies.”
How can standards help?
New AI-based processes require specific AI runtime security controls, according to Neelou. “Adding AI to systems makes them less secure, because many AI vulnerabilities are inherent. Traditional security solutions can’t address these unique AI risks and specialized AI security solutions are required. Traditional security principles are still relevant but require re-thinking for the AI world,” he indicates.
The Open Worldwide Application Security Project (OWASP), to which Neelou’s company contributes, is a non-profit foundation that works to improve the security of software. It’s an open community dedicated to enabling organizations to conceive, develop, acquire, operate and maintain applications that can be trusted. An industry standard is also being developed to protect against the defaults of agentic AI. A2AS: Agentic AI Security Framework claims to define security boundaries, authenticate prompts, apply security rules and custom policies, and control agentic behaviour, enabling a defense-in-depth strategy. It is also an open source project to which many of the big tech companies contribute to.
International standards from the IEC also play a significant role in shaping this landscape, including frameworks like ISO/IEC 27001 for information security, cyber security and privacy protection. IEC 62443 specifically addresses the industrial automation and control systems (IACS) used in critical infrastructure, including healthcare. Whereas IT security focuses in equal measure on protecting the confidentiality, integrity and availability of data – the so-called “C-I-A triad” – in the world of operational technology (OT), availability is of foremost importance. Priorities for OT environments focus on health and safety, and protecting the environment. In the event of an emergency, it is vital that hospital operators receive accurate and timely information and can quickly take appropriate actions, such as shutting off power or shifting to backup equipment, to protect personnel or patients. IEC 62443 provides comprehensive guidance to protecting the safety, integrity, availability and confidentiality in IACS.
The highly anticipated fourth edition of IEC 60601-1 announced by technical committee IEC TC 62, which prepares standards for medical equipment, is expected to reach completion by the summer of 2029. It represents a major evolution in medical device safety standards, according to TC 62 expert Ayub Yancheshmeh. “For the first time, it introduces formal requirements for cyber security and artificial intelligence, reflecting the growing complexity of connected and intelligent medical technologies,” he describes.
This update to IEC 60601-1 includes a focus on cyber security risk management through the implementation of threat modelling, vulnerability testing and the provision of secure integration instructions. Data protection and system integrity are now considered part of essential performance. Devices using AI must also meet requirements for transparency, bias detection, performance verification and human-AI interaction.
“Because AI models are complex and often opaque, subtle manipulations may go unnoticed, especially in connected devices that rely on cloud updates or external data. This fourth edition addresses these risks by embedding AI-specific threat modelling, requiring manufacturers to anticipate how AI could be misused and build safeguards like input validation and anomaly detection,” Yancheshmeh adds.
In addition, TC 62 is addressing AI in the medical field through its Software Network and Artificial Intelligence Advisory Group (AG SNAIG), and several projects are currently in progress, including, for instance, the first edition of IEC 63450, which deals with the testing of AI/machine learning-enabled medical devices. While AI and cyber threats are evolving rapidly and are increasingly being used to target hospitals, the need for cyber security standards in healthcare is more acute than ever.










Archivio Numeri