Monday, May 6, 2024

Woke AI stirs controversy


In recent days, News Corp has found itself embroiled in a peculiar confrontation with the specter of what it has deemed “woke AI.” The Australian, a flagship publication under the News Corp umbrella, ran an exclusive report alleging left-wing bias in Meta’s latest large language model (LLM), Llama 3. The controversy centers on Llama 3’s ranking of Australia’s greatest politicians, which notably placed figures like Gough Whitlam and Malcolm Turnbull in prominent positions while seemingly omitting stalwarts like John Howard and Robert Menzies.

The article raised eyebrows by highlighting Peter Dutton’s placement as the “least humane” politician, a designation that some critics argue mirrors his public image. However, the uproar extended beyond mere disagreement with Llama 3’s assessments, with figures like shadow communications minister David Coleman, conservative stalwart Michael Kroger, and former communications minister Richard Alston expressing outrage. Alston, notably, is promoting a new book critiquing societal elites, a seemingly tangential addition to the discourse.

Sky News and various News Corp tabloids echoed The Australian’s sentiments, amplifying the narrative of biased AI shaping discourse. However, the assertion that Meta hastily adjusted Llama 3’s rankings in response to The Australian’s scrutiny lacks grounding in how LLMs operate.

Contrary to the notion that LLMs harbor beliefs or opinions, they function as predictive text engines trained on vast datasets drawn from the internet. Dr. Jenny L. Davis of the Australian National University emphasizes that LLMs reflect the societal biases and structural inequities inherent in their training data. In that sense, LLMs are inherently conservative, not necessarily in the political sense, but in their reliance on existing information that can lag the present by several years.
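To make the “predictive text engine” point concrete, the following minimal Python sketch shows what next-word selection amounts to: the model assigns a score to each candidate continuation based on patterns in its training data, converts the scores to probabilities, and samples one. The prompt, candidate names and numbers are invented for illustration; they are not Llama 3’s actual vocabulary, scores or interface.

import math
import random

def softmax(logits, temperature=1.0):
    # Turn raw scores into a probability distribution over candidates.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates after a prompt like
# "Australia's greatest prime minister was" (illustrative only).
candidates = ["Whitlam", "Menzies", "Howard", "Turnbull", "Keating"]
logits = [2.1, 1.9, 1.8, 1.2, 1.0]  # made-up scores standing in for training-data patterns

probs = softmax(logits, temperature=0.8)
for token, p in zip(candidates, probs):
    print(f"{token:>8}: {p:.2%}")

# Sampling from the distribution, not consulting a stored opinion.
print("sampled continuation:", random.choices(candidates, weights=probs, k=1)[0])

Because the output is sampled from a probability distribution learned from internet text, the same prompt can yield a different “ranking” on different runs; there is no settled belief being reported.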

The evolution of AI models has yielded diverse outcomes, from Meta AI’s refusal to generate images of interracial couples to Google AI’s depiction of Nazis as people of color. These discrepancies underscore the complexity of data sources and the challenges in mitigating bias within AI systems. While Llama 3 draws from platforms like Wikipedia, the origins of its training data remain opaque, with speculation surrounding sources like Reddit and Stack Overflow.

Research from institutions such as the University of East Anglia and Carnegie Mellon University has shed light on potential biases within AI models. A 2023 study identified a “liberal” bias in ChatGPT, while CMU research indicated that a previous version of Llama leaned slightly more authoritarian and right-wing. Notably, efforts to curb hate speech during AI training may inadvertently steer models toward more liberal stances on social issues.

The revelation that News Corp itself relies on AI to generate content underscores the ubiquity of automated systems in media production. Despite News Corp’s scrutiny of AI “wokery,” the irony of their own reliance on AI-generated content—reportedly spanning thousands of articles across their mastheads—cannot be overlooked.

In summary, the clash between News Corp and Llama 3 serves as a microcosm of broader debates surrounding AI bias and its implications for societal discourse. As AI continues to permeate various facets of human interaction, addressing bias and fostering transparency in AI development remain paramount challenges for technologists and society at large.
