Warning for parents as government vows to restrict 'nudify' apps used to create child abuse material (2 Sep 2025)
Article summary: The Australian government has pledged to restrict so‑called “nudify” apps—AI tools that generate non‑consensual explicit images, often involving children—a move triggered by rising concerns about child exploitation and the spread of AI‑generated abuse material. These tools are easily downloadable and enable users to produce harmful images within seconds, escalating the threat to child safety.
Key advocates, such as former Australian of the Year Grace Tame, have long warned that the legal system isn’t keeping pace with technological advances. She emphasises the urgency of criminalising the possession and use of AI tools designed to create explicit material involving minors. In response, the government is advancing reforms to hold platforms responsible and introduce stronger regulatory measures.
Reports indicate that AI‑generated child abuse material is soaring—law enforcement and child protection experts report that criminals are exploiting AI to produce and disseminate abuse content with alarming ease and anonymity. Pressure is mounting to impose a "duty of care" on platforms and app developers to proactively detect, remove, and prevent access to these exploitative tools.
The proposed legislation follows earlier steps by Independent MP Kate Chaney (Curtin), who introduced a private member’s bill in July 2025 specifically criminalising AI tools built to generate child sexual abuse material. Her bill calls for new offences, including the use of carriage services (like the internet) to facilitate these technologies and penalties of up to 15 years in prison. Importantly, it also allows for lawful public interest exceptions—for example, for law enforcement or intelligence agencies authorised to investigate such crimes.
Child safety experts endorse Chaney’s initiative, calling it a targeted, urgent fix for a legislative gap. They stress that AI‑generated abuse material, though synthetic, exploits real victims by using their images or identities to train these AI models—a harm that is direct and profound.
Despite broad agreement on the need for swift action, there is recognition that regulating AI is complex given the technology's rapid evolution. Officials, including the Attorney‑General, have committed to exploring regulatory options that balance harms against public benefit while preserving trust in AI systems.