AI Ethics in Practice at NHS England
Defining ethical AI development best practice for data practitioners in the NHS
Warning
This project is currently in development, and as such the following is subject to change.
Background
AI presents transformative opportunities in healthcare, but the technology brings with it risks to people, the natural environment and society at large. NHS England has a responsibility to effectively harness AI to support the provision of better care to all patients.
To do this, data scientists (among others) need to ensure ethical considerations are embedded in the development, evaluation and implementation of AI and data science projects more generally.
However, despite the proliferation of guidance from research institutions and other organisations, there are few practical examples of what ethical AI in health looks like for those involved in its design and implementation.
Aim
We have written a White Paper in which we identify the characteristics of ethical AI that data scientists at the NHS should be concerned with. This takes into account other actors in AI development at the NHS, including cybersecurity, assurance and governance colleagues.
The paper proposes that data scientists focus specifically on ensuring four characteristics of AI: that it is fair, transparent, value-adding and reliable.
Our practical suggestions for how we can work towards embedding these characteristics in working practices include:
- Trialling tools and frameworks on live and emerging projects and sharing learnings with the wider community. This will eventually constitute a portfolio of real examples that can inform future projects, similar to the use cases featured in the OECD's Catalogue of Tools & Metrics for Trustworthy AI.
- Coordinating a series of interactive workshops to build awareness of ethical risks, and to create and sustain a shared vocabulary for documenting and mitigating these risks effectively throughout each project's lifecycle.
- Developing standardised resources that are flexible enough to suit different types of data science project, while ensuring a minimum level of consideration and proportionate action.
- Mapping the development processes for AI at the NHS and the actors involved, and growing a platform for sharing knowledge and experiences. This will help us identify how to incorporate ethical considerations into the data science lifecycle.
Outputs
We have:
- Developed a Model Card Template (an illustrative sketch of the kind of fields such a template covers is shown after this list).
- Coordinated the publication of a record for the Automatic Moderation of Ratings & Reviews project on the government's Algorithmic Transparency Recording Standard.
- Written a (currently internal) White Paper defining the scope of operationalising AI Ethics in NHS England.
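To give a sense of the kind of information a model card records, the sketch below defines a small Python dataclass with indicative fields. The field names and example values are illustrative assumptions only; they are not the contents of the NHS England Model Card Template.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """Illustrative model card fields; the actual NHS England template may differ."""
    model_name: str
    version: str
    intended_use: str                       # what the model is for, and for whom
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data: str = ""                 # provenance and known limitations of the data
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    fairness_considerations: str = ""       # e.g. performance across patient subgroups
    ethical_risks_and_mitigations: list[str] = field(default_factory=list)
    contact: str = ""


# Hypothetical example values for a review-moderation model
card = ModelCard(
    model_name="review-moderation-classifier",
    version="0.1.0",
    intended_use="Flag ratings and reviews for human moderation.",
    out_of_scope_uses=["Automatic removal of reviews without human oversight"],
    evaluation_metrics={"precision": 0.91, "recall": 0.87},
)
print(json.dumps(asdict(card), indent=2))
```

Recording these fields in a structured form, rather than as free text alone, makes cards easier to review consistently and to publish alongside transparency records.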
In progress
We are currently exploring:
- Using the Data Hazards project to communicate potential harms of our work.
- Developing a generic statement detailing how projects have taken ethical considerations into account.
- Supporting our information governance teams in assessing whether additional instruments (such as model cards) can help inform a Data Protection Impact Assessment for AI use cases.