
Physiology News Magazine
Policy Focus
Physiology at the heart of AI safety
News and Views
https://doi.org/10.36866/pn.132.8
Andrew Mackenzie
Associate Director of Strategy and External Relations, The Physiological Society
Artificial intelligence (AI) has the potential to revolutionise healthcare by significantly improving disease detection and prevention. However, The Physiological Society is concerned that AI healthcare tools are being developed, approved, and adopted without sufficient physiological input. This should be a key priority as the Government considers the outputs of its recent AI Safety Summit at Bletchley Park.
AI will play a pivotal role in the ambition to transition the NHS from what is often more akin to a “national sickness service” to a model focused on preserving good health, leveraging innovation and technology for early disease detection. This shift will support a more integrated, whole-person approach to care and facilitate timely interventions, alleviating pressure on overburdened primary and secondary care and enabling individuals to enjoy healthier lives for longer.
However, the successful implementation of AI tools in healthcare is not without its challenges and risks. From inaccurate diagnoses to perpetuating existing health inequalities through biased data and access, it is crucial to carefully develop, adopt and monitor AI tools to prevent potential harm.

To explore current perspectives on the role of physiology in developing and using AI tools within the UK health system, The Physiological Society convened over 30 experts from across the AI and life science fields in June 2023 to consider the opportunities and risks. The resulting report, “From ‘Black Box’ to Trusted Healthcare Tools: Physiology’s role in unlocking the potential of AI for health”, underscored the importance of incorporating physiological measurements and insights into the development of AI tools, and of fostering a research ecosystem that leverages physiological understanding rather than working within siloed areas of specialism.
The limited inclusion of physiological evidence in the development of AI tools can lead to reduced trust and challenges with applicability and, at worst, to the identification of spurious correlations that lack physiological plausibility, ultimately harming patients.
Our report presents a set of recommended actions for the Government, the NHS, research funders and other stakeholders to utilise physiology as a “guardrail” when developing AI in health, in order to maximise the benefits while minimising risks. Integrating physiological knowledge into relevant AI models and systems can enhance the understanding and interpretation of complex health data, ultimately leading to better-informed decision-making.
Our recommended actions highlight the critical role that physiologists and physiology play in underpinning effective AI tools by ensuring model plausibility, assessing relevant training datasets and improving the interrelationship between biomedical understanding, machine-learning systems and clinical expertise.
To harness the full potential of AI in healthcare and ensure its safe and effective implementation, we believe it would be beneficial to adopt a comprehensive approach based on the establishment of a Physiology & AI Framework, the prioritisation of physiological plausibility in research funding mechanisms and the inclusion of physiological evidence in the regulatory approval process.
Our report made three core recommendations that, together, outline a strategic roadmap for achieving these goals:
Action 1: Establish a Physiology & AI Framework to set improved guardrails for AI in health
To ensure that AI tools in healthcare are not only safe, but also accepted by their intended users and beneficiaries, The Physiological Society is coordinating efforts to establish a Physiology & AI Framework, working with stakeholders across healthcare services, research and AI.
The framework will set out principles for physiologically plausible technologies, improve dialogue and knowledge transfer between stakeholders and establish training programmes. This will help establish guardrails around AI and health by identifying how to integrate physiological measurements and expertise into technology development and testing, prior to deployment in healthcare settings.
Our ambition is that the framework will form the platform and evidence base on which regulators, funding organisations and policymakers make decisions on effective implementation of trustworthy AI health tools. The framework will include three key elements:
- The development and adoption of a set of principles and success criteria that describe physiologically plausible technological applications, to help lift the lid on the “black box” that is AI.
- A forum to regularly bring physiologists together with other key stakeholders, to achieve a shared understanding of physiological plausibility, opportunities and risks of AI tools in healthcare.
- A training programme for physiologists, developers and data scientists, to create a shared language and understanding to build physiologically plausible technology by design.
Action 2: Ensure that research funding mechanisms prioritise physiologically plausible AI tools
We recommend that research funders review the governance of funding mechanisms that concern the use of AI in healthcare. Where relevant, physiological plausibility should be included as a key decision factor in assessing the quality of grant proposals, and research teams should include physiological expertise.
Action 3: Embed physiological evidence in the regulatory approval of AI tools
We recommend that the Medicines and Healthcare products Regulatory Agency (MHRA), in partnership with other relevant regulators, should make physiological evidence and insight a foundation of their regulatory approval process for AI tools as medical devices.
Similarly, the NHS and the National Institute for Health and Care Excellence (NICE) should update their assessment mechanisms for digital health technologies to include coverage of physiological evidence.
Through its centralised coordination function, the AI and Digital Regulations Service should ensure that all collaborating regulatory organisations include physiological expertise and evidence in the assessment of relevant AI tools for healthcare.
Taken together, these recommendations provide a roadmap for establishing “guardrails” for the development of AI in health. This is required in order to achieve the aims of improved patient outcomes, higher trust in innovative AI technologies and a more efficient healthcare system. By placing physiology at the heart of AI’s adoption into healthcare, we can truly unlock the potential of this fast-evolving technology to support us all in living healthier lives for longer.
Find out more about The Physiological Society’s work on AI and health at physoc.org/AIPhysiology