AI regulation developments in Australia

Australia has yet to enact specific laws or regulations directly governing artificial intelligence (AI). To date, the nation’s approach has been largely voluntary, exemplified by the AI Ethics Principles published in 2019. These principles encompass eight voluntary guidelines for the responsible design, development, and implementation of AI, aligning with the OECD’s Principles on AI.

In June 2023, the Commonwealth Department of Industry, Science and Resources initiated a consultation on “Safe and Responsible AI in Australia.” This consultation aimed to develop governance mechanisms to ensure the safe and responsible development and use of AI and to identify potential regulatory gaps.

On January 17, 2024, the Australian Government released its interim response to this consultation. The response highlighted that existing regulatory frameworks may not adequately prevent harms from AI systems in high-risk contexts. Consequently, significant regulatory reform is anticipated, focused on a risk-based framework that would impose mandatory safeguards on AI in high-risk settings.

Following this interim response, the government established an Artificial Intelligence Expert Group. This group will assist the Department in formulating regulations on transparency, testing, and accountability, with particular attention to mandatory AI safeguards in high-risk settings.

Current Legal Framework Impacting AI

While no specific statutes or regulations directly regulate AI in Australia, several laws indirectly affect its development and use. Key among these are:

  • The Online Safety Act 2021: This law addresses online safety issues, including AI-generated content.
  • Australian Consumer Law: Applied to algorithm-driven conduct in a Federal Court case in which Trivago was fined $44.7 million for misleading consumers through its algorithmic hotel recommendations.
  • Privacy Act 1988: Governs data protection and privacy.
  • Corporations Act 2001: Influences corporate practices, including those involving AI.
  • Intellectual Property Laws: Affect various aspects of AI development and use.
  • Anti-discrimination Laws: Address discriminatory outcomes from AI-driven processes.

The interim response also indicated that existing laws might need strengthening to mitigate AI-related harms. In line with this, the government is developing new laws to grant the Australian Communications and Media Authority (ACMA) regulatory powers to combat online misinformation and disinformation, including AI-generated content.

Definitions and Scope of AI

Australia has not formally adopted any statutory definitions of AI. However, the Department’s consultation provided the following definitions:

  • AI: An engineered system that generates predictive outputs (e.g., content, forecasts, recommendations) for human-defined objectives without explicit programming.
  • Machine Learning: The use of algorithms to derive patterns from training data, which are then applied to new data to make predictions or decisions.
  • Generative AI: Models generating novel content (e.g., text, images, audio) in response to prompts.

These definitions align with those from the International Organization for Standardization.

Sectoral and Territorial Scope

Currently, there is no specific guidance on the territorial or sectoral scope of AI regulation in Australia, as no direct regulations exist. Future AI-specific regulations are expected to be applicable across all sectors of the economy, rather than being sector-specific.

Compliance and Core Issues

The voluntary AI Ethics Principles aim to ensure AI is safe, secure, and reliable. These principles advocate for:

  • Human, Societal, and Environmental Wellbeing: AI should benefit individuals, society, and the environment.
  • Human-Centered Values: Respect for human rights, diversity, and individual autonomy.
  • Fairness: Inclusivity, accessibility, and non-discrimination.
  • Privacy Protection and Security: Upholding privacy rights and data security.
  • Reliability and Safety: Reliable operation of AI systems as intended.
  • Transparency and Explainability: Responsible disclosure so people can understand when they are significantly impacted by AI.
  • Contestability: Timely processes for people to challenge the use or outcomes of AI systems that significantly affect them.
  • Accountability: Those responsible for each phase of the AI lifecycle should be identifiable and accountable, with human oversight of AI systems enabled.

Regulatory and Enforcement Landscape

There is currently no AI-specific regulator in Australia. However, sector-specific regulators such as the Australian Competition and Consumer Commission, ACMA, the Office of the Australian Information Commissioner, and the eSafety Commissioner are expected to play roles in regulating AI. ACMA, in particular, is anticipated to receive new regulatory powers to address AI-generated online misinformation. Until dedicated AI laws are enacted, enforcement and penalties related to AI will remain tied to breaches of existing, non-AI-specific statutes and regulations.
