By Professor Kimberlee Weatherall
Governments around the world have high hopes for Artificial Intelligence (AI). AI and automation hold out the promise of more and better government services – faster, more efficient, more personalised and more targeted – for more Australian citizens and residents.
And yet, it is clear that there are real challenges of trust and confidence. Australians have been shown to have lower levels of trust in AI than their counterparts globally. And it is not clear that confidence is much higher within government, or that most public servants feel well equipped to navigate these new and fast-developing technologies.
Inspiring ambition there certainly is. In June 2024, Australia’s Data and Digital Ministers issued a National Framework for the assurance of artificial intelligence in government, which declares:
“Australian governments will adopt a lawful, ethical approach to AI that places the rights, wellbeing and interests of people first.”
Similarly, in the recently published Policy for the responsible use of AI in government, the Commonwealth government has recognised it has “an elevated level of responsibility for its use of AI and should be held to a higher standard of ethical behaviour.” In short, in the National Framework and elsewhere, Australian governments have committed to being exemplars in the safe and responsible use of AI.
Dictionaries define “exemplar” to mean “an ideal model, or example” – one to be copied or imitated. Clearly, to be an exemplar, one must not only act consistently with the highest ethical standards, but also be a visible model and set an example others can learn from and follow.
To achieve the status of exemplar in safe and responsible AI use, there is more that Australian government departments, agencies, and public servants can do.
First, I’d love to see governments and public servants establish high standards for themselves for the use of AI and automation. The current Commonwealth Policy for the responsible use of AI in government is unfortunately underwhelming. Basic actions that would be considered no-brainers in the private sector – training for staff, or developing an internal understanding of where and how AI is being used in the organisation – are only ‘recommended’ under the Policy, rather than expected.
In fact, the Voluntary AI Safety Standard, developed by the National AI Centre and CSIRO and published in August as a guide for all Australian organisations, is a striking contrast. It sets out far higher and more concrete standards than any of the Frameworks, Guides or Policies published by the Commonwealth government.
Second, if you want to be seen as setting an example, people must be able to observe, and learn from, what you’re doing. There are some terrific examples out there of Australian government organisations being transparent about their use of AI and automation. Transport for NSW, for example, has published a detailed Automated Enforcement Strategy; NSW trials of NSWEduChat have also been well-publicised.
For the most part, however, it is hard for the public to find out much about how AI or automated systems are being used by Australian governments. I know, because our research team tried – systematically. The NSW Ombudsman asked ADM+S to undertake a project mapping the use of automated decision-making systems across NSW state and local governments. It was a challenging task: this information isn’t as public as it might be.
The resulting compendium of systems, we hope, will be a goldmine of information for both the public and public servants. It can demonstrate the many benefits of automation in NSW, and perhaps inspire others.
Australians do expect government to visibly demonstrate high standards in the use of AI. Public trust depends on it. The steps Australian governments have taken, and are taking, to demonstrate a commitment to safe and responsible AI are notable, and headed in the right direction. I hope we will see more in the months to come.