An Australian Senate inquiry has raised concerns about the transparency of tech giants Amazon, Google, and Meta in their use of Australian data for artificial intelligence (AI) development. The inquiry’s final report, published this week, criticized the companies for their reluctance to disclose how personal and private data was utilized in training AI models, prompting calls for robust regulatory measures to safeguard Australians’ rights.
Urgent need for dedicated AI legislation
The report, led by Labor Senator Tony Sheldon, underscored the necessity of standalone legislation to regulate AI technologies. Senator Sheldon described the practices of multinational technology corporations as akin to “piracy,” accusing them of exploiting Australian culture, data, and creative assets for profit without equitable returns.
One of the key recommendations was to classify general-purpose AI models—such as OpenAI’s GPT, Meta’s Llama, and Google’s Gemini—as “high risk.” This categorization would mandate transparency and accountability requirements for companies developing such technologies. The inquiry concluded that existing laws are inadequate to address the challenges posed by the rapid advancements in AI.
Amazon and Google drew particular attention for their evasive responses, declining to clarify how data collected from services such as Alexa, Kindle, and Google’s platforms was used to train their AI systems. Meta, despite admitting to scraping data from Australian Facebook and Instagram users since 2007, could not explain how those users could have consented to applications not envisioned at the time the data was collected.
Protecting creative workers from AI disruption
The inquiry underscored the significant risks AI presents to the creative industries, emphasizing the need for mechanisms that guarantee equitable compensation for creators whose work AI systems use. Recommendations included requiring developers to disclose copyrighted materials in their datasets and adhere to proper licensing and payment procedures.
The report emphasized that creative professionals are among the groups most vulnerable to AI’s disruptive potential. Without adequate safeguards, the rapid evolution of AI could undermine their livelihoods, the report warned.
Organizations representing the creative sector, such as APRA-AMCOS and the Media Entertainment and Arts Alliance, welcomed the report’s findings. They expressed support for establishing an AI Act, which would introduce clear measures to protect intellectual property and ensure fair treatment for creatives.
Divergent views on AI regulation
Despite the report’s strong recommendations, the inquiry revealed differing opinions on how AI should be regulated. Coalition senators Linda Reynolds and James McGrath expressed reservations about imposing stringent regulations, warning that overregulation could impede innovation and limit potential job creation. They viewed AI’s risks to cybersecurity and national security as more pressing than its impact on the creative economy.
In contrast, the Greens argued that the report’s recommendations did not go far enough. They warned that Australia risks falling behind jurisdictions such as the UK, Europe, and California, which are already advancing comprehensive AI regulatory frameworks.
Balancing innovation and oversight
The contrasting perspectives reflect the complexity of regulating AI in a manner that addresses its risks while fostering its opportunities. As AI continues to reshape industries and societies, the need for thoughtful, balanced regulation becomes increasingly urgent.
This Senate inquiry represents a significant step in shaping Australia’s policy response to AI, laying the groundwork for legislation that aims to balance innovation with the protection of individual rights and creative industries. With the global influence of AI expanding rapidly, the findings of this report could play a crucial role in determining Australia’s position in the evolving technological landscape.