Global policy
2025-01-27
Good machine learning practice for medical device development: Guiding principles
Introduction
Artificial intelligence (AI) technologies, including machine learning, have the potential to transform health care by deriving new and important insights from the vast amount of data generated in health care every day. They use algorithms that can learn from real-world use and potentially use this information to improve the product’s performance. But they also present unique considerations due to the iterative and data-driven nature of their development. This document establishes a common set of principles for the community to promote the development of safe, effective, and high-quality medical devices that incorporate AI. These principles are important to apply across the lifecycle of the medical device.
The 10 guiding principles for Good Machine Learning Practice (GMLP) presented in this document are a call to action to international standards organizations, international regulators, and other collaborative bodies to further advance GMLP. Areas of collaboration include research, creating educational tools and resources, international harmonization, and consensus standards to inform regulatory policies and regulatory guidelines. These guiding principles may be used to adopt practices from other sectors, tailor them to medical technology and healthcare, and develop novel practices for this domain.
2025-01-27
Characterization considerations for Medical Device Software and Software-Specific Risk
Introduction
Software’s role in healthcare is becoming increasingly critical, as a diverse array of products serves various medical and administrative functions across clinical and private settings. A subset of software used in healthcare is regulated globally as a medical device by regulatory authorities. In 2013, the International Medical Device Regulators Forum (IMDRF) introduced the concept of Software as a Medical Device (SaMD) and subsequently proposed a possible risk categorization framework (IMDRF/SaMD WG/N12 FINAL:2014). Building on the collective experience of its members, the IMDRF SaMD Working Group (WG) now has an opportunity to add to those initial concepts by providing guidance related to device characterization and risk characterization for a broadened scope of medical device software. In the context of this document, risk characterization is meant to help identify potential harms associated with medical device software and is based on a careful review of device characterization. Risk characterization can help to develop a robust understanding of the overall risk of the device and can serve as an input to risk assessment and management activities, or to risk categorization and device classification. This new document is intended to focus on characterization and can supplement categorization/classification frameworks (e.g., N12 and other legally defined classification schemes across jurisdictions) by providing additional considerations on medical device software and related risk characterization.
2025-01-29
International AI Safety Report
Executive Summary
This report summarises the scientific evidence on the safety of general-purpose AI. The purpose of this report is to help create a shared international understanding of risks from advanced AI and how they can be mitigated. To achieve this, this report focuses on general-purpose AI – or AI that can perform a wide variety of tasks – since this type of AI has advanced particularly rapidly in recent years and has been deployed widely by technology companies for a range of consumer and business purposes. The report synthesises the state of scientific understanding of general-purpose AI, with a focus on understanding and managing its risks. Amid rapid advancements, research on general-purpose AI is currently in a time of scientific discovery, and – in many cases – is not yet settled science. The report provides a snapshot of the current scientific understanding of general-purpose AI and its risks. This includes identifying areas of scientific consensus and areas where there are different views or gaps in the current scientific understanding.
People around the world will only be able to fully enjoy the potential benefits of general-purpose AI safely if its risks are appropriately managed. This report focuses on identifying those risks and evaluating technical methods for assessing and mitigating them, including ways that general-purpose AI itself can be used to mitigate risks. It does not aim to comprehensively assess all possible societal impacts of general-purpose AI. Most notably, the current and potential future benefits of general-purpose AI – although they are vast – are beyond this report’s scope. Holistic policymaking requires considering both the potential benefits of general-purpose AI and the risks covered in this report. It also requires taking into account that other types of AI have different risk/benefit profiles compared to current general-purpose AI.
The three main sections of the report summarise the scientific evidence on three core questions: What can general-purpose AI do? What are the risks associated with general-purpose AI? And what mitigation techniques exist against these risks?