The global push for AI in public services is gaining pace, and governments around the world are searching for ways to digitise and democratise access to essential services. In October 2024, Kenya made a significant move by introducing the Social Health Authority (SHA), an ambitious healthcare system designed to replace its long-standing national insurance scheme. The goal, touted as a digital transformation milestone, was noble: to extend affordable care to the country’s massive informal economy, ensuring that the 83% of the workforce made up of day labourers, farmers, and other non-salaried workers is no longer left behind.
However, months after its launch, the initiative has emerged as a global case study in the perils of algorithmic bias. Central to the controversy is a predictive machine learning model used to decide how much individual Kenyans should pay for their healthcare premiums. Rather than relying on traditional income tax records, which are scarce in an informal economy, the system uses proxy means testing (PMT).
PMT is an old concept, long advocated by international financial institutions such as the World Bank. It attempts to estimate a household’s wealth from observable assets and living conditions. In Kenya, government volunteers went out to collect data points: What is the roof made of? Is there a radio? What kind of sanitation is used? An algorithm processes this data to calculate the household’s financial capacity. But a digital audit of the system uncovered a critical flaw: it systematically overcharges the poorest citizens while underestimating the wealth of the rich. The algorithm relies on rigid, static data points and cannot grasp the fluid realities of an informal economy. For example, a struggling farmer living in a home with an iron-sheet roof and an electricity connection might be labelled “middle-income” by the algorithm, resulting in a health insurance premium that consumes 10%–20% of their meagre income.
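To make the mechanics concrete, the sketch below shows how a PMT-style scoring model typically works: each observable indicator carries a weight, the weighted sum stands in for income, and fixed thresholds map that score to a premium band. The indicator names, weights, and premium amounts here are illustrative assumptions for this sketch only; the SHA’s actual model has not been published.

```python
# Hypothetical illustration of a proxy means test (PMT).
# Indicator names, weights, score thresholds, and premiums are invented
# for this sketch; they are NOT the actual SHA model.

from dataclasses import dataclass

# In real systems these weights are fitted on household survey data that
# links observable assets to measured consumption or income.
WEIGHTS = {
    "iron_sheet_roof": 1.5,
    "electricity_connection": 2.0,
    "owns_radio": 0.5,
    "improved_sanitation": 1.0,
}

# Score thresholds mapping a household to an income band and monthly premium (KSh).
PREMIUM_BANDS = [
    (2.0, "low-income", 300),            # score below 2.0
    (4.0, "middle-income", 1200),        # score below 4.0
    (float("inf"), "high-income", 3000), # everything else
]

@dataclass
class Household:
    indicators: dict  # e.g. {"iron_sheet_roof": True, "owns_radio": False, ...}

def pmt_score(household: Household) -> float:
    """Weighted sum of observed asset indicators: the proxy for wealth."""
    return sum(WEIGHTS[name] for name, present in household.indicators.items() if present)

def assign_premium(household: Household) -> tuple[str, int]:
    """Map the PMT score to an income band and a monthly premium."""
    score = pmt_score(household)
    for threshold, band, premium in PREMIUM_BANDS:
        if score < threshold:
            return band, premium
    raise AssertionError("unreachable: the last band covers all scores")

# A struggling farmer with an iron-sheet roof and an electricity connection
# scores 3.5 and lands in the "middle-income" band, regardless of cash income.
farmer = Household({"iron_sheet_roof": True, "electricity_connection": True,
                    "owns_radio": False, "improved_sanitation": False})
print(assign_premium(farmer))  # ('middle-income', 1200)
```

The sketch also shows why such a system misfires: a durable roof and a power connection are enough to push a household across a band threshold, even when its actual cash flow is close to zero.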
The human cost of this data bias is high. In Nairobi and in rural areas alike, vulnerable individuals promised free or affordable care are receiving bills they cannot afford. Many are opting out of the system altogether: for them, it is a choice between feeding their family and paying a premium into a broken digital scheme. According to reports, only a small percentage of the 20 million registered people are actually paying, leaving hospitals with huge funding shortfalls and critically ill patients without care. Kenya provides a global example of an important trade-off that policymakers face when they use predictive algorithms. Authorities, aware of the system’s known inaccuracies, reportedly opted to focus on correctly taxing the rich to prevent revenue loss, implicitly accepting the collateral cost of overcharging the poor. That decision highlights a key problem: opaque means-testing algorithms undermine public trust and function more like a lottery than a social safety net.
Kenya’s experience is not an isolated case. Similar PMT systems in parts of Asia, Latin America, and other African nations have often been characterised by high error rates, in some cases excluding over 80% of their target populations. The lesson extends well beyond Africa. For the global tech community and international policymakers, it is a clear signal that technology cannot solve socio-economic problems if the underlying data models are inflexible and biased. Digital transformation in healthcare holds enormous potential. But for governments and tech developers, “leaving no one behind” means prioritising transparent, equitable algorithms over opaque efficiency. Until artificial intelligence models can accurately capture the intricate, lived experiences of the informal workforce, the promise of universal digital healthcare will remain unattainable for those who need it most.


