
How Does Human Error Affect Machine Learning?


Science & Technology, UK (Commonwealth Union) – Scientists are developing a way to build a defining human characteristic – uncertainty – into machine learning systems.

The comprehension of human error and uncertainty remains a challenge for numerous artificial intelligence systems, particularly in scenarios where human input informs the learning process of a machine. Many of these systems are designed under the assumption that human inputs are consistently definitive and accurate. However, the reality of human decision-making encompasses occasional errors and uncertainties.

A collaborative effort involving researchers from the University of Cambridge, The Alan Turing Institute, Princeton, and Google DeepMind aims to bridge the gap between human behavior and machine learning. Their goal is to enable AI applications to better account for uncertainty, especially in situations where humans and machines collaborate. This endeavor holds the potential to enhance the reliability and trustworthiness of such applications, particularly in critical domains like medical diagnosis where safety is paramount.

To achieve this, the team modified a well-established image classification dataset so that human participants could provide feedback while also indicating how uncertain they were about their label for a given image. The study revealed that training AI systems with these uncertain labels can improve their handling of uncertain input, though the researchers noted that including human input can also lower the overall performance of such hybrid systems. The findings of this research will be presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES 2023) in Montréal.
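To make the idea concrete, here is a minimal sketch (in PyTorch, not the authors' published code) of how a classifier can be trained against "soft" label distributions rather than hard one-hot targets; all names, shapes, and the example probabilities are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_label_loss(logits, soft_targets):
    """Cross-entropy against a full label distribution.

    logits:       (batch, num_classes) raw model outputs
    soft_targets: (batch, num_classes) rows summing to 1, e.g. human
                  annotators' probability judgments instead of one-hot labels
    """
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Illustrative example: an annotator who is 70% sure an image belongs to
# class 2 and 30% sure it belongs to class 5 (out of 10 classes).
soft_target = torch.zeros(1, 10)
soft_target[0, 2] = 0.7
soft_target[0, 5] = 0.3

logits = torch.randn(1, 10, requires_grad=True)  # stand-in for a model's output
loss = soft_label_loss(logits, soft_target)
loss.backward()
```

Trained this way, the model is rewarded for matching the annotators' full probability judgments rather than a single forced choice, which is the general mechanism by which uncertain labels can influence what the system learns.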

“Uncertainty is central in how humans reason about the world but many AI models fail to take this into account,” explained first author Katherine Collins of the University of Cambridge’s Department of Engineering. “A lot of developers are working to address model uncertainty, but less work has been done on addressing uncertainty from the person’s point of view.”

The researchers pointed out that humans constantly make decisions by weighing probabilities, often without conscious deliberation. Much of the time – as when someone mistakenly greets a stranger who resembles a friend – the consequences of getting it wrong are negligible. In certain contexts, however, uncertainty comes with significant safety hazards.

Collins remarked that numerous human-AI interfaces presume individuals are perpetually certain about their choices, contrary to actual human behavior, where errors occur. The team’s focus was on examining what happens when people convey uncertainty, a matter of particular significance in safety-sensitive scenarios, such as a medical professional collaborating with an AI-powered medical system.

Matthew Barker, co-author of the study and a recent MEng graduate of Gonville & Caius College, Cambridge, indicated that better tools are needed to recalibrate these models, granting the individuals working with them the ability to express their uncertainty. He added that while machines can be trained with unwavering assurance, humans often cannot provide this level of certainty, which poses challenges for machine learning models.

In their investigation, the researchers utilized three standard machine learning datasets: one for digit classification, one for categorizing chest X-rays, and one for classifying bird images. For the first two datasets, the researchers simulated uncertainty; for the bird dataset, human participants indicated how certain they were about the attributes of the images they observed, such as whether a bird appeared red or orange. These human-provided ‘soft labels’ enabled the researchers to gauge the impact on the final output. Nevertheless, they observed a significant decline in overall performance when human judgment replaced machine input.
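As an illustration of how a self-reported certainty rating might be turned into such a soft label (a hedged sketch of the general technique; the paper’s exact elicitation scheme may differ), one simple approach gives the chosen attribute the stated confidence and spreads the remainder uniformly over the alternatives:

```python
import numpy as np

def confidence_to_soft_label(chosen_class, confidence, num_classes):
    """Spread (1 - confidence) uniformly over the non-chosen classes.

    chosen_class: index of the option the annotator picked (e.g. 'red')
    confidence:   annotator's self-reported certainty, in (1/num_classes, 1]
    """
    label = np.full(num_classes, (1.0 - confidence) / (num_classes - 1))
    label[chosen_class] = confidence
    return label

# e.g. an annotator 80% sure a bird's plumage is 'red', choosing among
# ['red', 'orange', 'yellow'] -> [0.8, 0.1, 0.1]
print(confidence_to_soft_label(0, 0.8, 3))
```

Soft labels constructed this way can then feed directly into a loss like the one sketched earlier, letting the training signal reflect how sure each annotator actually was.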

The outcomes of the study highlighted a number of unresolved challenges in integrating humans into machine learning frameworks. The researchers are releasing their datasets publicly to encourage further investigation into building uncertainty into machine learning systems.
