In recent months, generative AI programs have surged in popularity, funded and developed by tech giants like Microsoft and Google. However, beneath the surface lies a darker concern: the potential for predators to use these tools to automate child grooming¹. As the Australian government grapples with regulating this fast-growing technology, questions arise about its impact on online safety.
The Rise of AI Chatbots and Image Generators
AI chatbots and image generators such as ChatGPT, Bard, DALL-E, and Midjourney have captured public attention, but their rapid adoption raises alarms. Could they replace human employees? Might they be misused for misinformation, child exploitation, or scams? The eSafety commissioner, Julie Inman Grant, warns that chatbots could be created to contact young people, opening “sinister new avenues for manipulation”.
Ethical Challenges and Regulatory Steps
Governments worldwide face a daunting task. Australia, among the first countries to adopt national AI ethics principles, is working to close gaps in governance and capability. Ed Husic, the minister for industry and science, emphasizes that AI is not unregulated: existing copyright and privacy laws already govern how data is collected and used to train AI programs. Yet further regulation remains essential¹.
The Road Ahead
As AI continues to evolve, the balance between innovation and safety remains delicate. The eSafety commissioner’s proposals, which include detecting and removing child abuse material and disrupting harmful content, highlight the urgency. But the challenge lies in staying ahead of the curve, ensuring AI serves humanity without endangering it.