SDEs for AI Healthcare Technology

Optimising SDEs for AI Health Technologies

Projects

Delivered by our Network partners: University Hospitals Birmingham and University of Birmingham

Funders: NHS England (Data for R&D Programme) and Office for Life Sciences


This project explores how Secure Data Environments can be optimised to support AI health technology development. Led by UHB and the University of Birmingham, it balances innovation, privacy and governance, proposing technical and policy improvements to enable safe, trusted NHS data access.


Mission & Vision

This project aims to ensure that Secure Data Environments (SDEs) evolve to support responsible, efficient, and innovation-friendly development of AI health technologies while safeguarding patient privacy and public trust. Our vision is a future-ready SDE ecosystem that balances security with functionality, enabling frictionless access to high-quality NHS data for AI innovation, robust governance for emerging risks such as model leakage, and transparent engagement with the public to maintain confidence in data use.

Project Outputs

Through two national workshops, targeted interviews and broad engagement across industry, academia, regulation and the NHS, the project has produced the following outputs:

  • A comprehensive report setting out the current challenges and opportunities for enabling AI health technologies within SDEs
  • A set of practical recommendations to strengthen technical capability, streamline processes, and support consistent governance across the SDE network
  • An evidence‑informed analysis of model egress and disclosure risk, highlighting the need for clearer frameworks and further research
  • Insights from patient and public contributors, emphasising transparency, proportionality and clear articulation of public benefit

Together, these outputs provide a foundation for the next phase of SDE development and for ensuring that AI technologies can be developed in ways that are safe, efficient and trusted.

What is the project about?

SDEs play a central role in UK health data policy by providing controlled access to de-identified NHS datasets. However, current SDEs are often not optimised for commercial AI innovation: they can be slow, costly, and restrictive. This project explores how SDEs can be enhanced to support the full AI product lifecycle, from early-stage experimentation using synthetic data to pre-market development and post-market surveillance. It addresses both technical requirements (e.g., scalable compute, smooth ingress and egress of data and code) and governance challenges (e.g., mitigating disclosure risk from model outputs), aiming to create environments that are secure, functional, and trusted by users and the public alike.

Healthcare Technology Manufacturing

Why is this project commissioned?

AI health technologies require access to large, high-quality datasets for training, validation, and ongoing monitoring. While SDEs were introduced to enhance data access and public trust, their current design limits innovation, potentially reducing the UK’s competitiveness in AI healthtech. This project responds to that challenge by identifying practical solutions and governance frameworks that enable innovation without compromising privacy, security, or public confidence. By aligning SDE design with industry and regulatory needs, the project supports the safe and effective development of AI health technologies within the UK.

Who are the intended users?

  • AI HealthTech innovators: SMEs and larger manufacturers developing regulated AI health technologies.
  • Regulators and policymakers: MHRA, NHS England, and other governance bodies shaping SDE standards, including those designing the new Health Data Research System (HDRS).
  • SDE network and technical leads: Responsible for implementing secure, scalable and usable environments.
  • Public and patient representatives: Ensuring transparency, trust, and engagement in data use for AI innovation.

How are we making this project a reality?

  • Expert workshops: Conducting national workshops to capture industry needs and public/governance perspectives.
  • Technical proposals: Developing design objectives for an AI innovation platform within SDEs.
  • Risk framework: Co-developing a pan-SDE model risk assessment framework to manage disclosure risk.
  • Policy alignment: Collaborating with DARE UK and regulators to standardise guidance, best practices, and governance for AI healthtech innovation.

Healthcare Technology Workshops



Regulation of AI as Medical Tech

Qualification and Classification of Large Language Models

Projects

Delivered by our Network Partners: University of Birmingham and University Hospitals Birmingham

Funded by: The Health Foundation


This project clarifies when large language models should qualify as medical devices in healthcare. Led by the University of Birmingham and UHB, it defines LLM use cases, risk classification and regulatory guidance to support safe, accountable adoption and inform UK and international regulation.


Mission & Vision

This project aims to bring clarity and confidence to the regulation of large language models (LLMs) in healthcare by determining whether, when, and how they should qualify as medical devices. Our goal is to ensure these powerful tools are adopted safely, effectively and responsibly, with appropriate safeguards to protect patient safety and maintain public trust. By aligning emerging evidence with existing regulatory frameworks, we envision a future where healthcare organisations, developers and regulators have clear, practical guidance on LLM risk classification, enabling innovation while upholding the highest standards of safety, fairness and accountability.

What is the project about?

This project explores how LLMs should be assessed for qualification as a medical device and risk classified when used for medical purposes. It identifies common healthcare use cases for LLMs, develops evidence-based archetypal intended-use statements, and applies established regulatory principles to determine whether those uses fall within the definition of a medical device. The work will identify the characteristics that influence risk classification and produce policy recommendations for UK regulators (MHRA, NICE, CQC) and international bodies. It also aims to guide the future regulatory landscape by clarifying how LLM-based medical use cases should be assessed and governed, outlining what an optimal future approach to regulating these technologies should entail, and ultimately creating clarity on how LLMs fit within medical device regulation and what that means for their safe and effective use.

Classification of LLMs
Classification of Medical GPT-4

Why is this project commissioned?

LLMs such as GPT-4 are already being used in healthcare for multiple tasks including clinical documentation, decision support and patient communication. However, we have found that there is currently uncertainty even amongst regulatory experts as to when general-purpose LLMs should qualify as medical devices, creating a significant regulatory gap with potential implications for patient safety, transparency and accountability. This project aims to address that gap by aligning emerging evidence with regulatory frameworks, supporting safe, equitable and appropriately governed adoption of LLMs in clinical practice.

Who are the intended users?

  • Regulators and policymakers: MHRA, NICE, CQC, NHS England, and international partners (FDA, Health Canada, TGA).
  • Healthcare organisations: NHS trusts and providers evaluating or deploying LLM-based tools.
  • Developers and industry stakeholders: Companies building LLM applications for clinical or operational use.
  • Clinical, digital and governance leaders: Those responsible for risk assessment, safety oversight and AI adoption strategies.

How are we making this project a reality?

  • Evidence synthesis: Conducting a rapid scoping review of biomedical literature to identify healthcare applications of LLMs.
  • Stakeholder engagement: Hosting multi-stakeholder workshops with regulators, clinicians, legal experts and industry to refine intended-use statements and explore risk classification; identify gaps and inconsistencies in how current regulations are applied to LLMs in healthcare; clarify where uncertainty persists; and map how best to shape the regulatory landscape going forward.
  • Policy development: Drafting clear, actionable recommendations for MHRA and international regulators.
  • Public involvement: Running a PPIE workshop to gather patient and public perspectives on safety, fairness and accountability in LLM deployment.
  • Dissemination: Publishing the scoping review, policy report and workshop outputs, with UK-wide and international dissemination via CERSI-AI and the NIHR Incubator for AI and Digital Health.

Review of Biomedical Literature



AI Readiness Checklist

The AI Readiness Checklist

Projects

Delivered by our Network partners: University of Birmingham and University Hospitals Birmingham


The AI Readiness Checklist is a practical self-assessment tool helping UK healthcare organisations prepare for safe, effective and equitable AI adoption. Funded by NHS England and the Health Foundation, it evaluates data, governance, workforce and ethics readiness to support deployment.


Mission & Vision

The AI Readiness Checklist aims to support safe, effective and equitable AI adoption across the UK health and care system. Its mission is to define the minimum organisational capabilities required before deploying AI, ensuring that providers have the right data foundations, governance processes, clinical oversight and workforce skills in place. The vision is that all healthcare organisations become confidently “AI-ready” — able to adopt, evaluate and scale AI in ways that improve outcomes, uphold patient safety, and support responsible innovation.

What is the project about?

The AI Readiness Checklist is a practical self-assessment and diagnostic tool designed for healthcare organisations preparing to implement AI technologies. It provides a structured way to evaluate readiness across key domains such as data and digital infrastructure, governance and assurance, workforce capability, ethics and equity considerations, and organisational strategy. By completing the checklist, organisations can identify strengths, highlight gaps, and prioritise where to focus their improvement efforts, supporting more successful, safe and sustainable AI adoption.

AI Readiness Group Discussion
AI NHS Readiness

Why is this project commissioned?

The AI Readiness Checklist has been commissioned to help ensure that AI technologies are adopted safely, effectively and equitably across UK healthcare. Funded by The Health Foundation and NHS England, the project responds to the growing need for organisations to have proportional governance, robust infrastructure and a prepared workforce before implementing AI. CERSI-AI is supporting the initiative as part of its core mission to advance safe, trustworthy and evidence-based AI deployment in health and care. By establishing a clear, practical approach for “AI readiness,” the project aims to reduce variation across NHS trusts, prevent avoidable risks, and enable AI innovations to scale in a way that delivers real value for patients and the wider system.

Who are the intended users?

  • Healthcare organisations: NHS trusts, hospitals, and other healthcare providers considering or planning to adopt AI technologies.
  • Digital leaders, clinical leads, and information governance and data teams: those responsible for assessing AI readiness, designing strategy, or overseeing adoption.
  • Multi-disciplinary teams involved in innovation: IT, data, clinical, governance, and management staff who need to understand whether the organisation has the right foundations to adopt AI.
  • Policy-makers and regulators interested in guiding safe, responsible and equitable AI deployment across the healthcare system.

How are we making this project a reality?

We are making this project a reality through extensive research, engagement and real-world collaboration. The checklist has been developed through desk research, expert interviews and stakeholder consultation to ensure it reflects the practical needs of NHS organisations. We are piloting the tool with NHS trusts to gather feedback and refine its usefulness in live settings. In addition, we are working closely with devolved nations, including running a joint workshop with the Scottish Government to explore how the checklist aligns with NHS Scotland structures and priorities. We are also beginning to test how the tool could support primary care and social care settings, recognising that AI adoption is expanding beyond acute providers. By openly publishing the tool on aireadiness.uk and aligning it with national guidance and regulatory expectations, we aim to create a consistent, practical resource that supports safe, effective and scalable AI adoption across the whole health and care system.

NHS Scotland



Hardian Health AI

Hardian Regulatory Intelligence Platform (HaRi)

Projects

Delivered by network partner: Hardian Health


The Hardian Regulatory Intelligence Platform (HaRi) brings together global medical device regulatory and safety data into one connected system. Developed by Hardian Health, it supports faster, safer, and more transparent decision-making for regulators, healthcare professionals, researchers, and patients worldwide.


Mission & Vision

Today, information on medical devices, including device registrations, safety data, and efficacy, is fragmented across numerous portals and often presented without context. This is especially challenging for software and AI-based medical devices, which are frequently approved in multiple jurisdictions. HaRi addresses this gap by unifying global regulatory data to provide a clear, connected view of authorised medical devices.

What is the project about?

HaRi aggregates device registrations, safety data, NHS procurement information, and peer-reviewed evidence from sources such as the UK's Public Access Registration Database (PARD), the US Food and Drug Administration (FDA), the European Database on Medical Devices (EUDAMED), and PubMed, enabling searches across the UK, Europe, USA, Canada, and Australia.

Why is this project commissioned?

By unifying and contextualising regulatory intelligence, HaRi empowers safer, faster, and more transparent decision-making across the medical device ecosystem.

Hardian Regulatory Platform
Hardian Regulation Conference

Who are the intended users?

HaRi is designed for regulatory professionals, healthcare providers, researchers, procurement teams, regulators, and importantly, patients, providing a comprehensive, accurate, and up-to-date global view of the devices used every day.

How are we making this project a reality?

Led by Hardian Health, HaRi is continuously being updated with new data sources, enabling searches across the UK, Europe, USA, Canada, and Australia. In the long term, HaRi aims to support proactive regulatory oversight by providing early-warning indicators of emerging safety issues, and helping innovators understand the global regulatory landscape before market entry.

HaRi is currently in beta testing. You can explore the tool here:



Borderline Manual

Borderline Manual for Software as a Medical Device (SaMD) / AI as a Medical Device (AIaMD)

Projects

Delivered by our Network partners: Hardian Health and University of Birmingham


The Borderline Manual for SaMD and AIaMD provides practical UK guidance on classifying AI health technologies as medical devices. Developed by the University of Birmingham and Hardian Health, it clarifies regulatory status, risk classification and compliance pathways to support safe, accelerated innovation.


Mission & Vision

The mission of this project is to provide clear, practical guidance for innovators navigating the regulatory classification of AI health technologies. Our vision is a harmonised, transparent framework that enables safe and compliant innovation while accelerating development timelines and protecting patient safety.

What is the project about?

We are developing the UK’s first borderline manual specifically for AI as a Medical Device (AIaMD) and Software as a Medical Device (SaMD). This guidance document addresses the critical question facing AI health innovators: “Is my product a medical device, and if so, what risk class does it fall under?”

The manual provides detailed case studies across multiple clinical domains; each case study demonstrates the decision-making process for both qualification and classification.
Outputs will include consensus recommendations, illustrative examples, and guidance for innovators and regulators.

Borderline Manual for Medical Devices
AI Controlled Bionics

Why is this project commissioned?

Innovators often face uncertainty when determining whether an AI health technology qualifies as a medical device, which can delay development and slow adoption. Clear guidance is needed to improve compliance, reduce risk, and accelerate safe innovation. By addressing these “borderline” cases, the project ensures regulators, developers, and healthcare organisations can align expectations and make informed decisions.

Who are the intended users?

  • Primary: AI developers, health technology innovators, and medical device manufacturers determining regulatory pathways for software products.
  • Secondary: Regulators (MHRA and approved bodies), healthcare organisations evaluating AI technologies, legal and compliance teams, and policy makers.

How are we making this project a reality?

The manual uses a systematic approach, making it easy for manufacturers to find relevant guidance for their specific product type. We have developed archetypal use cases that demonstrate classification progressions, showing how subtle changes in intended use or functionality can shift regulatory status. Multi-stakeholder workshops will be conducted with innovators, clinicians, regulators, and legal experts to build consensus on classification and regulatory approaches. Outputs will be harmonised with the existing EU borderline manual to avoid conflict and confusion, and will be formally submitted to the MHRA as recommendations for official guidance. The manual aims to provide clear, practical advice to innovators on borderline cases, helping them plan development pathways, meet compliance requirements, and accelerate safe adoption of AI health technologies.

Multi-Specialist Workshop


