
Large language models may provide inaccurate information


Science & Technology, UK (Commonwealth Union) – The evolution of Large Language Models (LLMs) from knowledge bases to zero-shot translators marks a significant stride in how these powerful models are used to generate information.

However, a recent paper authored by leading Artificial Intelligence researchers at the Oxford Internet Institute asserts that LLMs pose a direct threat to scientific integrity due to their potential to generate ‘hallucinations’, or untruthful responses. Published in Nature Human Behaviour, the work by Professors Brent Mittelstadt, Chris Russell, and Sandra Wachter argues for restrictions on LLMs to safeguard the veracity of scientific knowledge.

The paper highlights that LLMs are designed to generate helpful and persuasive responses without inherent guarantees regarding their accuracy or alignment with factual information. A significant factor contributing to this issue is the use of data from potentially inaccurate sources in training these models. LLMs typically learn from large datasets extracted from online content, which may include false statements, opinions, and creative writing, among other forms of non-factual information.

Professor Mittelstadt elucidates that users often anthropomorphize LLMs, viewing them as human-like information sources, which can lead to unwarranted trust in the accuracy of their responses. This tendency is exacerbated by the design of LLMs as friendly, human-sounding agents capable of confidently addressing a wide range of questions with well-articulated text. Consequently, users may be easily persuaded that responses are accurate even when lacking factual basis or presenting a biased version of the truth.

To safeguard science and education against the dissemination of inaccurate and biased information, the authors advocate establishing clear expectations about how LLMs can contribute responsibly and constructively. The paper recommends that, especially in tasks where accuracy is crucial, users formulate translation prompts that incorporate verified, factual information.

Professor Wachter explained, “The way in which LLMs are used matters. In the scientific community it is vital that we have confidence in factual information, so it is important to use LLMs responsibly. If LLMs are used to generate and disseminate scientific articles, serious harms could result.”

Professor Russell said, “It’s important to take a step back from the opportunities LLMs offer and consider whether we want to give those opportunities to a technology, just because we can.”

LLMs are presently employed as repositories of knowledge, tasked with generating information in response to queries. However, this approach exposes users to the risks of encountering recycled false information from the training data and even ‘hallucinations’—erroneous information spontaneously generated by the LLM that was absent in the training data.

In response to these challenges, the authors advocate for a paradigm shift in utilizing LLMs, proposing that they be employed as ‘zero-shot translators.’ This approach involves users providing the LLM with relevant information and instructing it to transform the input into a desired output, such as rephrasing bullet points into a conclusion or generating code to convert scientific data into a graph.
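To make the ‘zero-shot translator’ pattern concrete, the minimal Python sketch below (illustrative only, not taken from the paper) shows how a user might supply their own verified material and ask the model solely to transform it. The placeholder points, the prompt wording, and the build_translation_prompt helper are all assumptions for illustration, and no particular LLM API is assumed.

```python
# Minimal sketch of the "zero-shot translator" usage pattern: the user supplies
# their own verified material and asks the model only to transform it (here,
# rephrase bullet points into a concluding paragraph), rather than asking it to
# recall facts on its own. The prompt built here would be sent to whichever
# chat/completion API the reader actually uses.

# Placeholder input: in practice these would be the user's own verified notes.
VERIFIED_POINTS = [
    "first point taken from the user's own verified notes",
    "second point taken from the user's own verified notes",
    "third point taken from the user's own verified notes",
]


def build_translation_prompt(points: list[str]) -> str:
    """Embed the verified input in the prompt and restrict the task to
    rephrasing it, so the output can later be checked against the input."""
    facts = "\n".join(f"- {p}" for p in points)
    return (
        "Using ONLY the points listed below, rewrite them as a single "
        "concluding paragraph. Do not add any information that is not in "
        "the list.\n\nPoints:\n" + facts
    )


if __name__ == "__main__":
    prompt = build_translation_prompt(VERIFIED_POINTS)
    print(prompt)  # send this prompt to the LLM of your choice
```

Because the model is only asked to restate the supplied points, every claim in its answer should trace back to the input, which is what makes the verification step described below tractable.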

Researchers of the study pointed out that by adopting LLMs in this capacity, it becomes more straightforward to verify the factual accuracy and consistency of the output with the provided input. The authors acknowledge the potential contribution of this technology to scientific workflows but emphasize the critical importance of subjecting the outputs to scrutiny to safeguard the integrity of robust scientific practices.

Lead author Dr. Brent Mittelstadt, Director of Research, Associate Professor, and Senior Research Fellow at the Oxford Internet Institute, together with co-authors Professor Sandra Wachter and Chris Russell, Dieter Schwarz Associate Professor in AI, Government & Policy and Research Associate, underscores the need to use LLMs as zero-shot translators in order to protect the integrity of scientific work.
