Making AI usable across the APS

Not sure where to start with AI? Here’s our tactical step-by-step guide to help your government organisation or department design and implement an AI strategy.

Sophie Wright

Head of Digital Transformation

Published: 22 January 2026

How might we use AI to improve the lives of Australian citizens and residents?

How can AI boost efficiency and productivity, without job cuts?

How do we fast-track AI experimentation and innovation, while maintaining privacy and security requirements?

These are just some of the questions I get asked when talking to leaders in government, and some of the challenges we’re helping government solve.

They highlight the messy, complicated reality governments must navigate as they push to simultaneously regulate and adopt AI safely and ethically.

After speaking with a range of departmental staff implementing AI, and seeing the progress made while attending the Digital Transformation Agency’s AI Government Showcase in August, I know these questions can be answered.

And I’m excited for what’s next.

Thoughtful AI adoption starts with a thoughtful strategy

As Head of Digital Transformation at Jude, I’ve been experimenting with and implementing AI, as well as monitoring its rapid adoption.

Its potential is limitless.

But so too is its potential to destroy trust.

So, it’s particularly important for governments to make sure they’re setting up their people with the right tools, guardrails and support to succeed with AI.

One of the best ways to start is with an AI strategy.

The foundations of a strong AI strategy for government

There’s no ‘one’ way to do strategy.

But having developed many digital strategies for government clients before, I believe the single most important thing is to be clear and practical.

You want anyone and everyone, from an APS2 through to the Secretary, to be able to pick up the strategy and run with it.

1. Define goals, metrics and use cases

Yes, your overall goal is likely the same as any digital or technical strategy — to set up the support, culture, processes and funding to encourage greater adoption and use of AI by your teams.

But what specific scenarios do you want your people to use AI for? Is it to increase internal productivity? Improve support to your customers? Creatively problem solve and scenario plan?

The goals will fall into two broad buckets:

  • For the Australian public (External)
  • For employees (Internal)

Use the Digital Transformation Agency’s Classification system for AI use to identify the use cases you’d like to focus on and tie metrics to them. (And yes, we will get to metrics later.)

Articulating these use cases will help:

  • show staff what AI use cases you’re most interested in, and therefore what ideas are more likely to get off the ground
  • direct effort and innovation in the right direction
  • inspire staff to pursue use cases and goals they hadn’t considered
  • with writing your AI transparency statement, which must be documented and publicly available under the Policy for responsible use of AI in Government.

2. Set up your governance and ethics model, and align with Australia’s national approach

To make sure AI is used safely and securely, it’s best to apply a whole-of-agency lens.

This involves setting up a governance committee that goes beyond just business and tech. Look to include experts from cyber security, information management, change management, human resources, risk and legal, so you consider AI from all angles.

For this committee, there are three key things to stay across:

  • the National framework for the assurance of artificial intelligence in government
  • Australia’s AI Ethics Principles
  • the Policy for responsible use of AI in government

I’ve spoken to quite a few agencies and noticed they are folding a range of governance approaches into their existing frameworks: formal AI steering committees, adoption committees, working groups and AI ‘front door’ teams – all helpful for maintaining a clear line of sight to safe AI use and upskilling staff on transparent guardrails.

3. Security, safety and privacy must be top of mind

The national framework and AI principles address security, safety and privacy, but I wanted to call these out explicitly because they’re so critical, especially within a Government context.

To continue to deliver trustworthy experiences for citizens, Government departments must be transparent and ethical in their use of AI. This means security and privacy must be top of mind, considered across the entire lifecycle of an AI solution, and be everyone’s responsibility — not just cyber security, risk and legal.

Part of this includes clarity on what you will and won’t use AI for, and what data can and cannot be fed into AI solutions. Communicating this clearly, providing dedicated training and setting up guardrails will be pivotal, and will help you avoid shadow AI (the use of unsafe, unapproved AI tools).

4. Set up a centralised location for AI governance

This can be as simple as setting up a SharePoint site and inbox that’s monitored by a dedicated few. The idea is to have a centralised place where anyone in the agency can:

  • Read the agency’s AI policies and strategy
  • Get access to approved tools, resources and training materials
  • Pitch ideas
  • Ask questions and connect with experts

This should be paired with a Teams channel, regular catch-ups and showcases when the time is right.

5. Understand how AI is currently being used internally

Another important step is understanding how AI is currently being used within your organisation. You’ll want to do a thorough audit to understand:

  • Which AI tools (if any) have recently been embedded into your existing tech stack, so you can make sure they meet security requirements
  • How staff are currently using AI
  • How they’d like to be using AI

6. Upskill and train your staff

This is a key one that you’ll want to do in partnership with your HR, learning and change management teams. Thankfully there are already some great training modules available through GovAI that you can start with.

Some things to consider:

  • Provide dedicated time for staff and teams to experiment with AI
  • Provide 101 training so everyone has a foundational understanding of AI, but be sure to pair this with targeted training for technical teams and leadership
  • Training should cover AI’s limitations (so staff can effectively assess outputs), how to use AI ethically and responsibly, and how to integrate AI tools into daily work rhythms

7. Apply your risk model for future tools

Managing risk will be an important part of successfully implementing AI solutions.

So be sure to apply existing risk models or matrices whenever you identify new use cases for AI. A good resource is the DTA’s risk matrix for AI, which was developed to help agencies identify new, high-risk use cases.
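
To make this concrete, here’s a minimal sketch of how a simple likelihood-by-consequence rating could be scored for a proposed use case. The scales, thresholds and example below are my own illustrative assumptions, not the DTA’s matrix:

    # Illustrative sketch only: a coarse likelihood x consequence rating for a
    # proposed AI use case. Scales and thresholds are assumptions, not the DTA's.
    LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
    CONSEQUENCE = {"minimal": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

    def risk_rating(likelihood: str, consequence: str) -> str:
        """Combine likelihood and consequence into a coarse risk band."""
        score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
        if score >= 15:
            return "high"    # escalate to the governance committee before proceeding
        if score >= 6:
            return "medium"  # proceed with documented mitigations
        return "low"         # proceed under standard guardrails

    # Example: an assistant drafting replies to public enquiries
    print(risk_rating("possible", "major"))  # 3 x 4 = 12 -> "medium"

Whatever scales you settle on, the value is in applying the same rating every time a new use case appears, so ‘high risk’ means the same thing to everyone.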

8. Use a framework to prioritise efforts

As with any new operational expenditure, you’ll want to take an evidence-based approach to defining the scope, schedule, budget and benefits of AI solutions.

This is where a prioritisation matrix will come in handy, giving you a clear-eyed view of where to focus effort, and how to manage the flood of requests and suggestions.
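
If it helps, a prioritisation matrix can be as simple as a weighted score against a handful of agreed criteria. Here’s a minimal sketch; the criteria, weights and candidate use cases are illustrative assumptions only, not an official framework:

    # Illustrative sketch only: weighted scoring of candidate AI use cases.
    # Criteria, weights and candidates are assumptions for demonstration.
    CRITERIA_WEIGHTS = {
        "citizen_benefit": 0.35,
        "staff_productivity": 0.25,
        "feasibility": 0.25,  # data readiness, tooling, skills
        "low_risk": 0.15,     # higher score = lower residual risk
    }

    def priority_score(ratings: dict) -> float:
        """Weighted sum of 1-5 ratings against each criterion."""
        return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

    candidates = {
        "summarise call-centre transcripts": {"citizen_benefit": 4, "staff_productivity": 5, "feasibility": 4, "low_risk": 3},
        "draft responses to routine correspondence": {"citizen_benefit": 2, "staff_productivity": 4, "feasibility": 3, "low_risk": 2},
    }

    for name, ratings in sorted(candidates.items(), key=lambda kv: priority_score(kv[1]), reverse=True):
        print(f"{priority_score(ratings):.2f}  {name}")

The numbers matter less than the discipline: every request gets assessed against the same transparent criteria, and the ranking gives you a defensible answer to ‘why this one first?’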

9. Measure and validate your efforts

As I flagged up top, metrics are essential to measuring and validating success. How do you know it’s working if you don’t measure it? Your metrics will depend on your goals and use cases, and should cover both quantitative measures like return on investment, cost savings and productivity uplift, and qualitative ones such as customer satisfaction, AI literacy amongst staff, and impact on decision-making processes.

Make sure you’re collecting this data so you can feed it into your continuous improvement loops.

This might include online forms for qualitative measures, and quantitative metrics such as how often people use AI, how many tools they use, how much time AI tools save, and how easy different tools are to use.
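
As a simple illustration, here’s a sketch of how a quarterly staff survey export might be rolled up into a few headline adoption numbers. The file name and column names are assumptions for the example:

    # Illustrative sketch only: summarising a staff AI-adoption survey export.
    # The file name and column names below are assumed for demonstration.
    import csv
    from statistics import mean

    def adoption_metrics(path: str) -> dict:
        """Summarise weekly AI usage, estimated time saved and ease-of-use ratings."""
        with open(path, newline="", encoding="utf-8") as f:
            rows = list(csv.DictReader(f))
        return {
            "respondents": len(rows),
            "weekly_users_pct": round(100 * sum(r["uses_ai_weekly"] == "yes" for r in rows) / len(rows), 1),
            "avg_minutes_saved_per_week": round(mean(float(r["minutes_saved_per_week"]) for r in rows), 1),
            "avg_ease_of_use_1_to_5": round(mean(float(r["ease_of_use"]) for r in rows), 2),
        }

    # Example: print(adoption_metrics("ai_adoption_survey_q1.csv"))

Tracked quarter on quarter, even a handful of numbers like these will tell you whether adoption is genuinely growing or stalling.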

10. Set up continuous improvement loops

Alongside a culture of continuous improvement and learning, it’s important to put in place feedback loops so that you’re monitoring and improving AI adoption and solutions in your organisation.

Armed with your metrics, make sure you establish a regular review cycle and then chart a path forward for improvement.

Empowering staff to experiment and grow

Encouraging your staff to safely experiment with AI is the key to unlocking its potential in Government. And it’s not just about training up certain teams, but really training up everyone – particularly frontline staff or service delivery teams who interact with customers every day. I believe this is where your best ideas will come from.

If you have questions or want feedback on how best to implement your AI strategy, reach out to Jude today.