Ambient Voice Technologies in the NHS

Establishing a public dialogue on Ambient Voice Technologies in the NHS

2nd April 2026

Listening to the public about how AI should support their care is at the heart of CERSI-AI’s mission. That’s why we funded new work to understand how patients feel about Ambient Voice Technology (AVT): AI tools that can listen to healthcare conversations.

Robin Carpenter from our partner Newton’s Tree has been leading a representative public dialogue to explore the public’s hopes, concerns, and expectations of AVTs. In the blog below, he shares why this matters and the work to date on this project, which aims to elevate the patient voice in the future of AI in the NHS.

Who asked you?

Ambient Voice Technology (AVT), which is AI that listens to conversations to transcribe and summarise them, appeared in healthcare very quickly. This was arguably driven by its value: potentially helping an overwhelmed workforce manage their cognitive load and reduce burnout. As you would expect, the healthcare ecosystem responded to this rapid emergence by focusing on safety. National bodies focused on regulation and on managing known issues, such as omitted or fabricated information. However, as national bodies, the NHS, and AI vendors collaborated on making this work, someone important seemed to be missing from the conversation:

The patients.

It seemed like there was an important public dialogue to be had about introducing a new third party that will record some of our most intimate conversations and confessions.

Laying the foundations

So, I sought funding from CERSI-AI to set up a public dialogue to address this. In preparation, I spoke to virtually every leading UK expert on medical AVTs to make sure my idea held water.

What struck me was how much this idea (having a robust public dialogue on AVTs) resonated with people. In calls, individuals shared concerns about how their loved ones would be affected by AVTs, fears that immigration status information would move beyond the clinical team, and questions about how well anybody really understood what they may be ‘consenting’ to.

I submitted a proposal to CERSI-AI and was lucky enough to be awarded a grant. This grant was essential because doing a public dialogue well is not cheap, and I wanted this done well.

The whole village came together

I can’t stress this enough: doing a public dialogue well takes a village. After speaking to several highly regarded organisations about delivering the public dialogue (to ensure its independence), I chose an independent organisation named Hopkins Van Mil. I chose them for several reasons: they are very experienced, they focus on the ethical support of participants, and they have delivered several impactful public dialogues on hard-hitting topics, from digital IDs to assisted dying.

We then needed to establish a governance board to oversee the whole project, so I reached out to leaders in our community, and they said yes! The board is composed of people from leading think tanks, charities, and the public sector.

Next, in preparation for the public dialogue, we needed to create education materials for the participants. Again, I reached out to leaders in the UK healthcare AI space (people I honestly thought would have no time even to read my email) and they said yes as well!

The very best in the UK healthcare AI community came together to give the members of the public everything they would need to have a robust conversation.

I am overwhelmed with gratitude for their support.

The big day

After months of work, the public dialogue took place in late March, and it may have been the best day of my career. Everything we had been working on came together in a robust, honest, and practical conversation. We also had AVT policy writers from the public sector observe the event so they could hear the public’s thoughts first hand.

Hopkins Van Mil has now begun its analysis, and we should have early findings soon. In the meantime, I am working with several groups to help ensure the patient’s voice is heard.

The village needs you

I am working hard to make sure the patient’s voice is loud, speaking to politicians, journalists, and civil servants, to name a few groups. But the patient’s voice can always be louder.

If you know any journalists, think tanks, civil servants, or parliamentarians that would be interested in supporting the patient’s voice, and you are happy to introduce me, then please email me at: avtpublicdialogue@newtonstree.com


Robin Carpenter

Head of AI Governance and Policy at Newton’s Tree

Robin Carpenter is the Head of AI Governance and Policy at Newton’s Tree, where he focuses on developing best practice in the evaluation, deployment, and monitoring of healthcare AI. He is also a Visiting Lecturer at King’s College London and an Honorary Research Fellow at the University of Birmingham, sits on the oversight committees of several large health tech conferences, and advises locally and nationally on healthcare AI.


Interview with Alicja Rudnicka on Automated Retinal Image Analysis Systems

27th November 2025

Recently, we spoke with Alicja Rudnicka, lead author of a groundbreaking study on automated retinal image analysis systems (ARIAS), which sets a new standard for transparency and fairness in medical AI evaluation.

Q: What problem were you aiming to address with this research?

Most current evaluations of artificial intelligence (AI)-based automated retinal image analysis systems (ARIAS) are carried out by the vendors themselves, often using datasets and testing environments of their own choosing. These single-algorithm, vendor-delivered studies tend to overestimate performance and rarely reflect how systems behave in real-world clinical settings. Differences in patient populations, image capture systems, and evaluation protocols make it nearly impossible to compare algorithms fairly, particularly when key details like the pre-selection or preprocessing of images are unclear. This approach limits transparency, comparability, and, ultimately, trust in AI-driven healthcare tools.

Q: How did your study take a different approach?

Recognising these shortcomings, this study took a fundamentally different path. It created a vendor-independent evaluation platform using the largest, most ethnically diverse NHS Diabetic Eye Screening dataset, in North-East London. Multiple state-of-the-art ARIAS, each certified as a medical device (with CE Class IIa certification granted or pending), were assessed on the same dataset under identical computational conditions. This ensured fair, direct, and reproducible comparisons while also testing for algorithmic fairness across diverse population subgroups (e.g. ethnicity and age). The result is a transparent, sustainable model for evaluating medical AI systems that mirrors real-world use.

Q: What are the implications for policy and future practice?

This work sets a new benchmark for independent, population-based evaluation of AI in healthcare, demonstrating how multi-vendor comparisons can be done impartially, at scale, and in the intended healthcare setting. It provides a blueprint for regulators, policymakers, and health systems to demand higher standards of evidence before deployment, and a more predictable adoption pathway for manufacturers. Beyond eye screening, the same principles could inform policy frameworks for AI evaluation across other disease areas, helping ensure that future health AI technologies are not only effective and safe for patients but also equitable and trustworthy.

Read the full research paper here: Automated retinal image analysis systems to triage for grading of diabetic retinopathy: a large-scale, open-label, national screening programme in England – The Lancet Digital Health

This study exemplifies the type of rigorous, independent research that CERSI-AI champions. By setting new standards for transparency and fairness in AI evaluation, it helps pave the way for safer, more equitable adoption of digital health technologies.

Want to learn more? Explore our latest projects, policy insights, and collaborative opportunities at CERSI-AI and join us in shaping the future of trustworthy AI in healthcare.

Professor Alicja Rudnicka

Professor Alicja Rudnicka is a Professor in Statistical Epidemiology in the Population Health Research Institute. Professor Rudnicka has been involved in a wide spectrum of epidemiological enquiry, including large-scale population-based studies and the application of artificial intelligence (AI) technology for analysing retinal images for risk prediction and disease detection.

Professor Adnan Tufail

Professor Adnan Tufail is a consultant ophthalmologist in the Medical Retina Service at Moorfields Eye Hospital, London, with special expertise in medical and inflammatory diseases of the retina and choroid. He is a Professor of Ophthalmology at University College London with extensive clinical and research experience. Professor Tufail is also a highly experienced cataract surgeon with expertise in the complex management of patients with cataract and the above retinal conditions.


Impact and Sustainability Workshop

Key insight from CERSI-AI’s Impact and Sustainability Workshop

28th August 2025

On 18th June 2025, we hosted our CERSI-AI Impact and Sustainability Workshop. Led by Alastair Denniston and Peter Bannister, the workshop brought together regulators, clinicians, SME manufacturers, approved bodies, and patients to address a critical challenge: how we can build regulatory frameworks that both protect patients and enable breakthrough innovations to reach those who need them.

Highlights included:

  • Stakeholder prioritisation sessions revealing critical regulatory needs which CERSI-AI can address.
  • A showcase of HaRi – Hardian Health’s Regulatory Intelligence database for AI-enabled medical device transparency.
  • International insights from Joseph Ross (Yale University/Mayo Clinic CERSI).

The discussions reinforced that sustainable AI regulation requires collaboration across the entire healthcare ecosystem. Over the coming weeks, we’ll be sharing key insights and recommendations from these important conversations. A huge thank you to all speakers, facilitators, and attendees for your energy, ideas, and commitment. We look forward to continuing the journey together.


Alastair Denniston Filming

Professor Alastair Denniston filmed with The Lancet Group

28th August 2025

Recording will be available in October.

CERSI-AI Director Professor Alastair Denniston has been filming with The Lancet Group, discussing how AI and machine learning are already transforming healthcare and research.

It’s a fascinating conversation about where we are now, and what’s next. We can’t wait to share the full recording with you this October!


AI Eye Screening Research

How do we ensure safe, effective implementation of AI for diabetic eye screening in the NHS?

28th August 2025

Earlier this summer, we brought together a group of experts for a focused workshop on evidence for ARIAS (Automated Retinal Image Analysis Software) in the English NHS Diabetic Eye Screening Programme (DESP).

Co-hosted by the ARIAS Research Group and the NIHR Incubator for AI and Digital Healthcare, the workshop covered:

  • Current evidence on human grader QA standards
  • Results from a recent head-to-head evaluation of multiple ARIAS tools
  • Key requirements to support safe, evidence-informed implementation in the NHS

Our goal was to share insights on performance, safety, and regulatory needs, and to collaboratively shape early recommendations for implementation pathways.

Thank you to everyone who contributed to such a constructive and impactful discussion. We’re excited about the next steps!

Images and LinkedIn post here: https://www.linkedin.com/posts/cersi-uk_aiinhealthcare-nhsinnovation-diabeticeyescreening-activity-7357041126198894594-LYpK?utm_source=share&utm_medium=member_desktop&rcm=ACoAAFRfVxcBHqGcyb7Pe-QXQWWRVZ1T7jiZA-U


LLM Medical Device

When does a large language model (LLM) become a medical device?

28th August 2025

When does a large language model (LLM) become a medical device? That critical question shaped a series of insightful discussions at our recent expert workshop, part of the Qualification and Risk Classification of LLMs project led by the AI & Digital Health Group at the University of Birmingham.

Funded by The Health Foundation and supported by CERSI-AI, we brought together regulatory experts from the Medicines and Healthcare products Regulatory Agency, notified bodies, and AI verification specialists to explore one of digital health’s most urgent challenges.

Together, we analysed real-world LLM use cases across healthcare, mapped regulatory pathways, and assessed device classifications and risk levels under existing frameworks. The conversations highlighted both the practical hurdles and exciting opportunities facing innovators and regulators alike. A huge thank you to everyone who attended and contributed to the discussions, and to our brilliant team for helping to shape such a thought-provoking and productive day.

This work reflects our commitment to enabling smarter, more adaptive regulation that supports safe and effective innovation in digital health – a commitment we’ll continue advancing through our ongoing collaborative workshops.


NHS AI Ready

What does it take for NHS organisations to be truly AI ready?

28th August 2025

On 11th July 2025 at the Wellcome Trust in London, CERSI-AI was proud to support a high-energy, expert-led workshop exploring exactly that. We welcomed policy makers, clinicians, public representatives, and digital leaders to define what “AI readiness” looks like in practice.

Hosted by the University of Birmingham and University Hospitals Birmingham NHS Foundation Trust’s AI and Digital Health Research & Policy group, with support from NHS England and The Health Foundation, the workshop focused on four core aims:

  • Characterise the organisational risks and opportunities of AI adoption
  • Develop a readiness framework grounded in real-world NHS contexts
  • Co-design practical tools to help provider organisations assess and build readiness
  • Signpost existing tools, frameworks and evidence that can support NHS organisations on their AI journey

The day included implementation case studies, collaborative risk-mapping, and lively group discussions on what success should look like, all rooted in NHS realities and future ambitions.

A huge thank you to our Chairs Jeff Hogg and Alastair Denniston, our brilliant speakers Robin Carpenter, Andy Mayne and Kevin Percival who shared valuable lessons from across the system, and to everyone who contributed their time, insight and expertise.

This work is a vital step toward supporting safe, responsible and scalable AI adoption in the NHS. We’re looking forward to sharing the project’s outputs as they develop.


CERSI-AI Announced

CERSI-AI announced by Lord Vallance, Science Minister

28th August 2025

CERSI-AI was announced today by Lord Vallance, Science Minister and formerly Chief Scientific Adviser to the UK Government. Lord Vallance noted “New technologies are transforming our economy at rapid pace.

“The system of regulation must keep up with that, so that we can quickly and safely seize the economic and social benefits that new innovations could unlock…that is why we are launching CERSIs. They will make a valuable contribution to regulatory innovation – and will complement wider efforts to make the UK’s regulation fit for the future…”

Prof Denniston, Executive Director of CERSI-AI, said “Working across the health and technology ecosystem, the Centre will identify and address current and future needs and opportunities in the regulation of AI and digital healthcare products and services … Key to the Centre’s role will be balancing the needs of innovators, such as speed and market certainty, with those of the end-users, such as cost-effectiveness, safety, equity and sustainability, to ensure resulting technologies are able to truly improve people’s lives.”


CERSI-AI Awarded to Birmingham

New national Centre of Excellence in AI and digital health awarded to Birmingham

28th August 2025

On 28 January 2025, the University of Birmingham was selected to host the Centre of Excellence for Regulatory Science and Innovation in AI & Digital Health Technologies (CERSI-AI), backed by a £1 million award to drive safe and effective innovation in digital healthcare.

CERSI-AI brings together a distinguished consortium of founding partners—including the University of York; industry innovators Hardian Health, Newton’s Tree, Romilly Life Sciences, and the Association of British HealthTech Industries; and NHS organisations like University Hospitals Birmingham NHS Foundation Trust and NHS Greater Glasgow and Clyde.

Guided by Executive Director Professor Alastair Denniston, the Centre aims to balance the pace of innovation with rigorous regulatory science—ensuring technologies are safe, effective, equitable, and sustainable, while supporting innovators with rapid development and regulatory clarity.

Part of a wider UK initiative, CERSI-AI is one of seven new CERSIs funded through a collaborative effort led by Innovate UK, the MHRA, the Office for Life Sciences, and the Medical Research Council. Collectively, they will generate tools, frameworks, and guidance to streamline the pathway for medical innovation in areas ranging from advanced therapies to digital health.

Full article here: New national Centre of Excellence in AI and digital health awarded to Birmingham – University of Birmingham
