Services Australia is wary of how external AI tools consume and reference its information, and is making targeted adjustments to how it manages and presents that information.

The work is revealed in the agency's automation and AI strategy for 2025-27, which emphasises trust and transparency around the use of such tools.
While much of the strategy is high-level, Services Australia does outline specifics when it comes to AI tools ingesting or working with its public-facing information, such as its websites and digital channels.
As part of this approach, Services Australia wants to make it "easier for external AI to extract the correct information and verify [its] content for accuracy", in a bid to ensure that AI tools provide correct advice to people who treat them like a search engine.
“We’re working to ensure our public-facing resources and digital presence, like our websites, are optimised for easier AI consumption,” the report said.
“Our content governance processes will help ensure customers using AI tools are presented with information about Australian payments and services that is up to date, reliable and relevant.”
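The strategy does not spell out how that optimisation will be achieved. One common technique, assumed here for illustration rather than drawn from the document, is to embed machine-readable schema.org JSON-LD in public pages so external crawlers and AI tools can extract and verify key facts without scraping prose. The Python sketch below shows how such markup could be pulled from a page and checked; the URL is hypothetical.

```python
import json

import requests
from bs4 import BeautifulSoup

# Hypothetical URL, used purely for illustration.
PAGE_URL = "https://www.servicesaustralia.gov.au/example-payment"


def extract_structured_data(url: str) -> list:
    """Fetch a page and return any schema.org JSON-LD blocks it embeds."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    blocks = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            blocks.append(json.loads(tag.string or ""))
        except json.JSONDecodeError:
            # Malformed markup is exactly what an external AI crawler would trip over.
            continue
    return blocks


if __name__ == "__main__":
    for block in extract_structured_data(PAGE_URL):
        if isinstance(block, dict):
            # A well-formed block lets an external tool verify key facts
            # (type, last-modified date) without parsing the page prose.
            print(block.get("@type"), "-", block.get("dateModified"))
```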
Services Australia also intends to “uplift agency information management practices” to ensure any AI “provide[s] consistent, quality outputs”.
“We’ll continue to optimise content in the public domain and explore new technologies, to build trust with Australians,” the strategy states.
Increasing sophistication
The strategy outlines the agency’s use of automation “to reduce and remove large amounts of repetitive, rules-based tasks”, with automation currently powering 600 internal and customer-facing processes.
According to the strategy, around 95 percent of Services Australia’s automated workflows fall under the “rules-based” category.
These are broken down into three types: end-to-end process automation, partially automated processing involving manual inputs, and information retrieval from high-volume data systems.
Services Australia is also expanding its focus into more sophisticated forms of automation, described as “adaptive” and “intelligent”, which the agency defines as encompassing predictive analytics, deep machine learning, and generative AI.
One use case cited in the report was Services Australia’s voice-enabled telephone routing service, which the agency said uses AI “to identify patterns in data” such as “themes and trends” within its digital assistant channels.
Earlier this year, Services Australia also revealed it is trialling machine learning to detect potential instances of identity theft affecting Centrelink customers, with the goal of stopping payments from being rerouted.
The report identified use cases in “debt prioritisation” and “fraud detection”. Services Australia CEO David Hazlehurst subsequently told Senate estimates the agency has “no current plans to use AI” to make decisions about payment entitlements.
To this end, the new strategy asserts that all AI usage will be “human-centric, safe, responsible, transparent, fair, ethical and legal”.
As such, the strategy identifies six key priorities to guide its AI implementation: building trust; human-led initiatives; mature governance and investment frameworks; standardised legislation and simplified policy; uplifting workforce capability and capacity; and, lastly, modular, connected and standardised platforms and systems.
On a technical front, Services Australia said it “will continue to review [its] technology stack to support [its] plans to improve technical resilience and minimise complexity”.
“Strong, secure infrastructure foundations will deliver services in times of need, whilst leveraging new and emerging technologies to scale innovative and uplifted services to staff and customers where it is beneficial,” the strategy states.
Services Australia is currently developing a 10-year ICT architecture strategy and plan, due by June 2025, which it said will be a key step towards modernising systems at the agency.
"We are focused on how our technology and digital capabilities progress us towards our future state...,” it added.
“In doing so, our goal is to provide a trusted and integrated portfolio of whole-of-government digital and legacy technology, delivering data and functionality that maintains safe, secure and resilient capability that can be scaled to meet demands.”
Robodebt legacy
Services Australia’s strategy sits alongside other federal policies, such as the policy for the responsible use of AI in government and the national framework for the assurance of artificial intelligence in government.
In addition, the Attorney-General’s Department (AGD) is currently drawing up a standard legislative framework to be used on a whole-of-government basis.
This forms part of the federal government’s response to the Royal Commission into the Robodebt Scheme, which highlighted serious governance failures and the misuse of automation in determining welfare debts.