On March 5, 2025, the European Union officially published the European Health Data Space (EHDS) Regulation, marking a significant step toward a unified digital health infrastructure. Designed to enable secure, cross-border access and exchange of electronic health data, the EHDS is set to transform patient care and medical research.
The implementation timeline is now fixed. The EHDS entered into force on March 26, 2025. By 2029, its core functionalities, including the exchange of patient summaries and ePrescriptions, are expected to be fully operational. By 2031, additional health data categories, including medical imaging, genomic data, lab results, and hospital discharge reports, will be integrated, making the EHDS a central repository for health data across the EU. While this infrastructure is meant to provide high-quality, standardized datasets for AI innovation, the AI Act does not yet clearly define how AI systems can access EHDS data, leaving crucial gaps in regulatory alignment. We have previously written about the EU's AI Act and its implications for the medical device sector.
AI in Healthcare
AI is already reshaping healthcare. Medical imaging AI detects diseases such as cancer, fractures, and retinal conditions with accuracy that, in some studies, matches or exceeds that of human radiologists. Predictive models analyze patient data to anticipate complications, optimizing treatment strategies in intensive care units and reducing hospital readmission rates. AI-driven drug discovery accelerates research, cutting years off traditional development timelines by identifying new compounds and repurposing existing drugs. Personalized treatment plans powered by AI are transforming oncology and genetic medicine, tailoring therapies to individual patient profiles. Meanwhile, AI-driven hospital automation streamlines administrative tasks, reducing costs and improving resource allocation.
None of these advancements is possible without high-quality, well-structured health data—precisely what the EHDS aims to provide. But without clear legal provisions linking the EHDS to the AI Act, developers remain in regulatory limbo.
EHDS and AI Act
The AI Act classifies healthcare AI as high-risk, requiring strict data quality, transparency, and human oversight measures. However, while the Act mandates that AI be trained on high-quality datasets, it does not define what qualifies as high-quality health data.
The EHDS follows FAIR principles (Findable, Accessible, Interoperable, and Reusable), ensuring data is standardized and easy to use. Yet, nowhere in the AI Act is it stated that EHDS datasets automatically meet compliance requirements. Developers face uncertainty—can they use EHDS data for AI training? Will they need to validate the data separately?
Regulatory Gaps and Unresolved Issues
The role of EHDS in AI governance remains ambiguous. While the EHDS mentions AI in its recitals, these are non-binding statements. There is no legally enforceable obligation that mandates the EHDS serve as a dedicated data hub for AI development. If EHDS data is meant to be a trusted training source, why isn’t this clearly defined in the AI Act? Without an explicit legal framework, the EHDS risks becoming an inaccessible resource for AI developers.
Data protection concerns are another roadblock. The AI Act’s Recital 45 allows data reuse for AI training, but does not specify how sensitive health data should be protected. The European Commission’s 2023 study on health data processing found that GDPR implementation is fragmented across EU member states, restricting cross-border access. AI developers face inconsistent data protection laws depending on jurisdiction, creating barriers to innovation. If EHDS data is meant to facilitate AI development, why hasn’t GDPR harmonization been prioritized?
Beyond data access, Article 10 of the AI Act mandates that high-risk AI systems be trained on datasets that are "representative" and, "to the best extent possible, free of errors". But who is responsible for ensuring EHDS data meets these requirements? AI developers? EHDS governing bodies? Member state regulators? Without defined oversight, compliance remains vague.
The AI Act also requires explainability and human oversight in AI-assisted decision-making. Article 13 mandates transparency and Article 14 human oversight, but the EHDS does not specify how AI-driven healthcare decisions will be explained to patients and physicians. If an AI system trained on EHDS data misdiagnoses a patient, who is liable? The physician? The AI developer? The EHDS authority? Without clear liability frameworks, legal risks could discourage AI adoption in healthcare entirely.

The Need for Harmonization
For AI-driven healthcare to function safely and effectively, the EHDS and AI Act must work in alignment. The EHDS must be explicitly recognized as a legally valid data source for AI training, removing ambiguity. The AI Act must define clear, enforceable data quality standards, ensuring EHDS datasets automatically meet compliance requirements, while eliminating cross-border inconsistencies.
Governance must be well-defined. The EHDS should establish certification mechanisms verifying that its datasets comply with the AI Act’s high-risk data standards. AI transparency requirements must be integrated into EHDS regulations, ensuring physicians and patients understand how AI-driven decisions are made.
Liability must be addressed. If AI-driven diagnoses or treatment recommendations go wrong, the legal responsibility of developers, healthcare providers, and data authorities must be clarified. The AI Act’s current approach to liability remains vague, leaving healthcare professionals exposed to unresolved legal risks.
The Future, If the Gaps Are Closed
The EU has an opportunity to lead the world in AI-driven healthcare, but not if its regulatory framework remains fragmented. The EHDS was meant to be a cornerstone of digital health transformation—but if it isn’t fully integrated into the AI Act, it risks being an underutilized asset.
If these gaps are addressed, the EHDS could become a model for secure, standardized AI-driven healthcare. If not, AI innovation in Europe may stall under the weight of regulatory uncertainty. Europe has taken the first step with the EHDS. Now it must ensure that AI can walk through the door.