Australians remain palpably skeptical of news articles crafted by generative artificial intelligence (AI), even as media organizations increasingly integrate the technology. Generative AI first captured broad public attention in late 2022 with the launch of OpenAI’s ChatGPT and the second iteration of its text-to-image generator, DALL-E.
A recent federal government Television and Media Survey of approximately 5000 Australians, conducted in late 2023, found that while 61 percent of respondents were familiar with generative AI, nearly four in five viewed news content produced entirely by the technology negatively. Even when AI was only involved in the writing process, more than half of respondents said it reduced their trust in the news.
The skepticism stems chiefly from concerns about the data AI models draw on. Respondents worried that unverified or untrustworthy sources could feed into AI-generated news, and questioned its integrity and the ethics of producing it. Many Australians also believe that human oversight brings a vital layer of accountability and ethical judgment to news production.
The survey also found near-unanimous support for transparency in media production: 95 percent of respondents believed people should be told how much of a news item was generated by AI.
Incidents such as Channel Nine’s apology to a Victorian MP in January, after it published an image digitally altered by Adobe Photoshop’s AI-driven “generative expand” tool, have sharpened the ethical questions surrounding AI in media and illustrated how the technology can be misused, or produce unintended consequences, in journalistic practice.
While media conglomerates like News Corp are exploring avenues to leverage AI for content optimization and revenue generation, concerns persist regarding the technology’s impact on journalistic integrity and societal well-being. News Corp CEO Robert Thomson has cautioned against the potential erosion of journalistic standards and societal harm resulting from unchecked AI implementation.
In response, News Corp Australia has added a clause to its code of conduct requiring that AI-generated content undergo editorial review and approval before publication on any platform, including social media. The measure aims to uphold editorial standards and reduce the risks of disseminating AI-generated content.
The Australian government has also moved to address AI’s growing influence in media, appointing a panel of 12 experts to examine the risks and develop regulatory frameworks. So far, however, no concrete regulations on AI in journalism have been formalized.
Communications Minister Michelle Rowland emphasized the importance of evidence-based policymaking, noting that the Television and Media Survey provides valuable insights into public engagement with media services and content. The government’s ongoing efforts to monitor and respond to evolving media landscapes reflect a commitment to safeguarding journalistic integrity and fostering public trust in media institutions.
As Australia navigates the intersection of technology and journalism, stakeholders grapple with the imperative to balance innovation with ethical considerations, ensuring that the media landscape remains a reliable source of information and accountability in society.