AI Regulation in Australia: Current Landscape and Future Directions

In recent years, rapid developments in Artificial Intelligence (AI) have reshaped global industries and businesses. In response, governments are working to establish regulations that can manage this transformative technology. Australia, recognising the immense potential and associated risks, has been proactive in shaping its regulatory approach.

Overview of AI in Australia

What is AI?

AI is defined by the Commonwealth Department of Industry, Science and Resources as “an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming. AI systems are designed to operate with varying levels of automation”. Essentially, AI refers to technologies that allow machines to mimic human intelligence and perform tasks that typically require human intelligence.

The impact of AI on Australia

Artificial intelligence is being integrated into sectors across the Australian economy, from healthcare to finance, as industries seek to capitalise on the productivity and efficiency benefits of implementing AI in their businesses. However, this rapid advancement also raises important questions about accountability, transparency, and the broader societal impacts of these systems.

What are the current laws surrounding AI in Australia?

Currently, Australia does not have a comprehensive set of laws specifically designed for AI, but several existing legal frameworks apply to AI technologies, and there are ongoing efforts to develop more AI-specific regulation. AI in Australia is therefore currently regulated through a combination of existing laws, guidelines and ethical frameworks.

Australia’s AI Regulatory Framework

Here is an overview of the key laws and regulatory measures that currently govern AI in Australia.

1. Privacy Act 1988 (Cth)

  • The Act regulates the collection, use and handling of personal data by AI systems, and is one of the most relevant legal frameworks for AI in Australia.

  • The Australian Privacy Principles, which form part of the Privacy Act, apply to organisations using AI technologies that process personal data. Organisations whose AI systems analyse, store or generate personal data must ensure compliance with these principles, especially around transparency, consent and data security.

  • A review of the Privacy Act is currently in progress to update it for AI and other emerging technologies. Proposed reforms include strengthening individuals’ control over their data and ensuring transparency in automated decision-making systems. 

2. Competition and Consumer Act 2010 (Cth) – Australian Consumer Law (ACL)

  • While there are no AI-specific laws within the ACL, its existing provisions apply to AI technologies in various ways.

  • For example, businesses must not use AI systems in ways that amount to misleading or deceptive conduct (Section 18), and products powered by AI must meet consumer guarantees such as being fit for purpose and of acceptable quality (Part 3-2). AI systems embedded in products are also subject to product liability rules, making businesses accountable for damages caused by AI-related malfunctions (Part 3-5).

  • The Australian Competition and Consumer Commission (ACCC) is the key regulator enforcing compliance with the ACL, and AI technologies developed by businesses are subject to its enforcement decisions. As AI adoption increases, the ACCC is likely to play a more active role in ensuring AI systems comply with consumer protection standards under the ACL.

3. Surveillance Legislation

a. Surveillance Devices Act 2004 (Cth)

  • AI systems used for surveillance or monitoring purposes, such as facial recognition technology, are subject to surveillance legislation in Australia. This includes the Surveillance Devices Act 2004 (Cth), and any relevant state laws.

  • In Victoria, for example, sections 6–11 of the Surveillance Devices Act 1999 (Vic) impose similar restrictions relevant to the use of AI in surveillance. The Act prohibits the use of listening devices, optical devices (such as cameras), tracking devices and data surveillance devices to monitor or record individuals without their consent, unless certain exceptions apply. AI-driven systems, such as facial recognition technology, fall within this framework when they are used to capture or track individuals’ activities without proper authorisation or consent.

b. Telecommunications (Interception and Access) Act 1979 (Cth)

  • The Act regulates the use of AI technologies in telecommunications surveillance, ensuring that interceptions of communications are conducted lawfully.

  • Section 7: Prohibits the interception of telecommunications unless it is conducted under the authority of a warrant issued to law enforcement or intelligence agencies. This would apply to AI technologies involved in intercepting phone calls, emails, or other forms of communication.

  • Section 108: Deals with unauthorised access to stored communications, which may apply if AI technologies are used to retrieve or analyse stored communications (e.g., emails or text messages) without lawful authority.

  • If AI is used to intercept communications or monitor data, these laws at both the federal level (for telecommunications) and state level (for data surveillance) come into play. The Telecommunications (Interception and Access) Act 1979 (Cth) is specifically designed to regulate the interception of communications, including voice and electronic communications, ensuring such actions are lawful.


Future Direction of Australian AI Legislation & Regulation

The Australian Government has made significant progress in its consideration of the future of AI regulation in Australia.

June 2023: The Commonwealth Department of Industry, Science and Resources published a discussion paper, “Safe and responsible AI in Australia”, which proposes a risk-based regulatory approach focusing on high-risk AI systems, such as those affecting human rights and safety. It suggests mandatory guardrails for these systems, including testing, transparency, and accountability requirements, while allowing voluntary standards for lower-risk applications. The framework aims to align with international regulations, such as the EU AI Act, and is designed to be flexible and scalable to adapt to evolving AI technologies. Ongoing consultations with public and industry stakeholders will refine these proposals.

January 2024: The government’s “2024 Interim Response” to the Safe and Responsible AI in Australia consultation highlights the government’s plan to implement mandatory guardrails for high-risk AI applications, aiming to address concerns about safety, accountability, and transparency. Key actions include the development of voluntary AI Safety Standards in collaboration with industry, options for voluntary labelling and watermarking of AI-generated content, and the establishment of an expert advisory group to guide further developments. The response emphasises a risk-based approach to AI regulation, focusing on high-risk areas without burdening low-risk applications, and aligning with international efforts in the EU, US, and Canada. Additionally, the government will strengthen existing laws related to privacy, competition, and online safety, while exploring new AI-specific legislation to ensure long-term regulatory clarity.

Conclusion

Australia is steadily advancing toward a comprehensive regulatory framework for AI that balances the need for innovation with the necessity of safeguarding public trust and safety. By adopting a flexible and scalable regulatory approach, informed by both domestic and international best practices, Australia is poised to lead in the responsible development and deployment of AI technologies. Through ongoing reforms to privacy, competition, and consumer protection laws, along with the introduction of AI-specific legislation, the government is laying the foundation for an AI-driven future that benefits society while mitigating risks.
