Healthcare (Commonwealth Union) – Artificial intelligence (AI) continues to show promise across many fields, and in healthcare it could save lives like never before.
An AI system capable of examining irregularities in the shape and structure of blood cells — and doing so with greater precision and consistency than human specialists — may transform how illnesses like leukaemia are identified, according to research co-led by a University College London (UCL) academic.
Scientists have developed a platform named CytoDiffusion, which applies generative AI, the same technology used to power image-creation tools such as DALL-E, to analyse the form and characteristics of blood cells.
Unlike many existing AI models that are trained mainly to recognise visual patterns, CytoDiffusion — created by a team jointly led by Professor Parashkev Nachev (UCL Queen Square Institute of Neurology), along with collaborators from UCL, the University of Cambridge and Queen Mary University of London — can reliably distinguish a broad spectrum of healthy blood cell features and detect uncommon or atypical cells that could signal disease. The findings appear in Nature Machine Intelligence.
Identifying small variations in blood cell size, shape and visual traits is central to diagnosing numerous blood conditions. However, this work demands extensive training, and even experienced clinicians can disagree when confronted with challenging samples.
The study's first author, Simon Deltadahl of the University of Cambridge, explained that our bodies contain many different kinds of blood cells, each with its own properties and role; white blood cells, for example, specialise in fighting infection. He pointed out that knowing what an unusual or diseased blood cell looks like under a microscope is a vital part of diagnosing many diseases.
Co-senior author Dr Suthesh Sivapalaratnam of Queen Mary University of London recalled the clinical challenge he faced as a junior haematology doctor: at the end of each working day, he would be left with a backlog of blood films to evaluate. Working through them late into the night, he became convinced that AI could do the job better than he could.
Researchers built CytoDiffusion by training it on more than 500,000 blood-smear images from Addenbrooke's Hospital in Cambridge. This collection, the largest of its kind, included typical blood cell varieties, unusual cases, and features that often trip up automated tools.
Instead of simply learning to sort cells into preset categories, the model was taught to understand the full range of how cells can look. This approach made the system more resilient to variations between labs, equipment, and staining techniques, and improved its ability to spot uncommon or abnormal cells.
In evaluations, CytoDiffusion identified irregular cells associated with leukaemia with much higher sensitivity than current technologies. It also equalled or outperformed top existing models, even when trained with far fewer examples, and could assess how confident it was in its own predictions.
Deltadahl noted that when the team measured accuracy, the system performed slightly better than human experts; what really made it stand out, however, was its ability to recognise when it was unsure. Their model, he added, would never claim certainty and still be wrong, something human experts can occasionally do.
The co-senior author Professor Michael Roberts of the University of Cambridge explained “We evaluated our method against many of the challenges seen in real-world AI, such as never-before-seen images, images captured by different machines and the degree of uncertainty in the labels. This framework gives a multi-faceted view of model performance which we believe will be beneficial to researchers.”
The researchers further demonstrated that CytoDiffusion could produce synthetic blood cell images that cannot be distinguished from real ones. In a 'Turing test' involving ten experienced haematologists, the experts were unable to reliably tell the real images apart from the AI-generated ones.






