
UK tests AI systems to reduce bias in healthcare sector

To complement the government's efforts to adopt artificial intelligence systems, the UK's National Health Service (NHS) has launched a programme designed to monitor bias in smart systems in the health sector and to create comprehensive, diverse datasets with the participation of patients and specialist caregivers.

Despite the achievements of equality movements, many people remain vulnerable to "implicit bias": stereotypes about others that unconsciously influence our understanding and actions. While human practice has generated many biases over time, it is especially unacceptable for a machine or intelligent software to reproduce them.

In the UK, artificial intelligence systems are widely used in vital fields and public services. These systems use algorithms to evaluate data collected in the field, build representations of it, and draw conclusions from it.

In the health sector, for example, these systems perform many administrative and diagnostic tasks, supporting the efforts of health workers and making treatment faster and more effective. However, they can backfire if their developers do not account for the algorithmic biases that have caused inequalities over the years. Prediction rules for heart disease, used by doctors in industrialized countries for decades, have been shown to be biased because of the studies on which they were originally based. Naturally, when a particular ethnic group makes up most of the cases whose data are collected, the results are more likely to apply to that group than to underrepresented or minority groups. Bias may also appear at any other stage of the algorithm-creation process, such as model selection or the application of results.

Because a problem that accumulates gradually requires progressive treatment, the NHS is working with the independent research institute, the Ada Lovelace Institute, to launch a world-leading approach to improving the ethical adoption of AI in the health system.

These efforts complement the work of the ethics team at the NHS AI Lab, which focuses on supporting researchers and developers in creating diverse, comprehensive datasets for training and testing AI systems, and on ensuring that data science teams include specialists from diverse backgrounds.

The trial uses algorithmic impact assessments to analyze AI decisions that may lead to substandard health outcomes based on patients' backgrounds and profiles. These assessments are questionnaires that review an organization's operations, data, and system design, helping to gauge the implications and risks that may accompany the adoption of an automated decision-making system.
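As a rough illustration of what such a questionnaire-based assessment might look like in code, the sketch below models questions about an organization's data, operations, and system design, each contributing to an overall risk score. The questions, categories, and weights are hypothetical assumptions for illustration, not the actual NHS or Ada Lovelace Institute instrument.

```python
# A minimal sketch of an algorithmic impact assessment (AIA) questionnaire.
# All questions, categories, and weights below are illustrative inventions.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    category: str   # e.g. "data", "operations", "system design"
    weight: int     # contribution to the risk score if answered "yes"

QUESTIONS = [
    Question("Does the system make or recommend clinical decisions?",
             "system design", 3),
    Question("Was the training data collected from a single region or provider?",
             "data", 2),
    Question("Are protected characteristics (age, sex, ethnicity) recorded, "
             "allowing subgroup evaluation?", "data", -1),   # mitigating factor
    Question("Can affected patients contest an automated decision?",
             "operations", -1),                              # mitigating factor
]

def risk_score(answers: list[bool]) -> int:
    """Sum the weights of all questions answered 'yes'."""
    return sum(q.weight for q, yes in zip(QUESTIONS, answers) if yes)

if __name__ == "__main__":
    # Example: a diagnostic model trained on one provider's data, with
    # subgroup evaluation possible but no route for patients to appeal.
    print(risk_score([True, True, True, False]))  # -> 4
```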

The Ada Lovelace Institute has published research papers exploring the effects of artificial intelligence on society and the environment, providing a detailed, step-by-step process for conducting algorithmic impact assessments on the ground. These assessments will be tested on a range of lab initiatives and will form part of the process for accessing the proposed national medical imaging platform and the national COVID-19 chest imaging database, which helps researchers better understand the virus and develop mechanisms for the care of critically ill patients. The database includes more than 60,000 radiographs from 27 NHS trusts, making it a convenient model for verifying a system's performance and monitoring any defects or bias. During the pandemic, with growing interest in using smart systems to improve services and enable home care, the team tested the accuracy of algorithms in diagnosing cases of infection and their level of performance across patients of different ages, genders, and ethnicities.
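The subgroup testing described here amounts to comparing a model's error rates across demographic groups. The following sketch, run on synthetic data, shows one minimal way to compute per-group sensitivity for a diagnostic classifier; the data, group labels, and function are illustrative assumptions, not the team's actual evaluation code.

```python
# Hedged sketch: compare a classifier's sensitivity (true-positive rate)
# across patient groups. A large gap between groups flags potential bias.
from collections import defaultdict

def sensitivity_by_group(y_true, y_pred, groups):
    """True-positive rate per demographic group."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for truth, pred, g in zip(y_true, y_pred, groups):
        if truth == 1:
            pos[g] += 1
            tp[g] += int(pred == 1)
    return {g: tp[g] / pos[g] for g in pos}

# Synthetic example: infection labels, model predictions, group membership.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]
print(sensitivity_by_group(y_true, y_pred, groups))
# -> {'A': 1.0, 'B': 0.333...}: the model misses far more cases in group B.
```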

Because the early stages of development offer flexibility, the team has sought to make the most of it, engaging patients and specialists and collecting their feedback and insights to make the necessary adjustments.

It should be noted that the only existing trial of algorithmic impact assessments was carried out in Canada, where the Treasury Board approved them to manage government procurement standards for artificial intelligence systems. A multi-section questionnaire asks some 60 questions about the systems' technical features, the data they rely on, and how they make decisions. Impacts are then categorized on a graded scale from "little to no impact" to "very high impact", determined by considerations such as individual rights, health and well-being, economic interests, and the ecosystem. Finally, the completed assessments are published on a dedicated website.
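To make the graded scale concrete, here is a minimal sketch of how a questionnaire score could be mapped onto impact levels, in the spirit of Canada's assessment. The thresholds and level names are assumptions for illustration, not the Treasury Board's official rubric.

```python
# Illustrative mapping of a raw questionnaire score to a graded impact level.
# Thresholds and labels are invented; the official rubric differs.
def impact_level(score: int, max_score: int) -> str:
    ratio = score / max_score
    if ratio < 0.25:
        return "Level I: little to no impact"
    if ratio < 0.50:
        return "Level II: moderate impact"
    if ratio < 0.75:
        return "Level III: high impact"
    return "Level IV: very high impact"

# Example: 33 risk points out of a hypothetical maximum of 120.
print(impact_level(33, 120))  # -> "Level II: moderate impact"
```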

The NHS has issued a report reviewing the concerns this technology faces. It explains that the aim is not to replace existing accountability tools and regulatory frameworks but to strengthen them, and to provide a normative framework for assessing the impacts of AI on people and societies.

In the same vein, the NHS assured patients that their data would be used responsibly and safely, in a way that benefits them directly and serves the public interest.

The NHS does not deny that algorithmic impact assessments face many challenges before they are ready to be applied in other areas and deliver real value; they will need to be adapted to suit each new context.

The new process will ensure that algorithmic biases are caught before systems reach NHS data, heading off the potential risks they carry. The pilot programme will also support researchers and developers with accurate information about what is happening on the ground to guide their efforts.

When patients and their families are part of the development effort, the result is a better experience and a true integration of AI, delivering better health outcomes for all, especially for minorities and high-risk groups.

By exploring the legal, social, and ethical implications of proposed smart systems, the NHS looks forward to a framework that enhances the transparency, accountability, and legitimacy of the use of artificial intelligence in healthcare.
