The Idea In 60 Seconds

  1. Shadow AI is the unauthorised use of AI tools like ChatGPT by employees to enhance their productivity, often without organisational approval.
  2. A 2025 survey (I’ve linked it below) revealed 25% of Australian public servants use shadow AI, driven by its simplicity and efficiency, especially for repetitive tasks.
  3. Shadow AI is growing because of the gap between employee readiness to adopt AI and organisational preparedness to support its use.
  4. It creates real risks: data privacy breaches, loss of oversight, reputational damage, deskilling, and misinformation.
  5. We should respond by acknowledging shadow AI, offering secure alternatives, educating employees, strengthening policies, and working with early adopters as knowledge hubs.
  6. Outright bans on personal AI use risk driving it underground.

Some More On Shadow AI

Not all AI use is happening in the open. An unusually prescient report released this week by the Australian Government (Jobs and Work Department), titled “Our Gen AI Transition”, describes the surprisingly widespread use of Generative AI, often without the knowledge or approval of organisations.

Shadow AI is the unauthorised use of AI tools, like ChatGPT or MidJourney, by staff to enhance their productivity. I think of it as ‘AI on the phone, under the desk.’ The problem is, people use these tools for a reason: they help them do their jobs more easily. That makes the use – and the risk – hard to stamp out. I don’t think the motivation is malicious. I just think they’re trying to do their job better.


The Rise of Shadow AI

The Jobs and Work report shows the startling prevalence of shadow AI in workplaces. According to a 2025 Mandarin survey, 25% of public servants across Australia are already using unauthorised AI tools to perform their duties. This figure is even higher among younger, tech-savvy workers and those in white-collar roles performing repetitive tasks.

The benefits of shadow AI are obvious: it saves people time, reduces workload, and boosts productivity. One Queensland public servant quoted in the report said that using AI allowed them to “do three people’s jobs” by automating low-value tasks like formatting reports or drafting meeting terms of reference. But the convenience comes at a cost.

Why Shadow AI is Growing

The first driver is the simplicity, accessibility, and familiarity of tools like ChatGPT from people’s personal lives. Many employees view these tools as faster and more intuitive than official systems. Why wouldn’t they use them?

The rise of shadow AI is also partly due to a gap between employee readiness and organisational preparedness. The same Mandarin survey found that 84% of public servants are eager to use AI tools to improve their efficiency, but only 25% believe their organisation is ready to support AI adoption. That disconnect leaves employees to find their own solutions.

The Risks of Shadow AI

Unauthorised use of AI introduces serious risks, particularly around data security, privacy, and compliance. Here are the most pressing concerns:

  1. Data Privacy Breaches
    Employees might upload sensitive or classified information into external AI tools without realising the implications. Models like ChatGPT store data on servers that may be located internationally, outside Australian jurisdiction. This creates a risk of data breaches and non-compliance with privacy laws.
  2. Lack of Oversight
    Shadow AI is happening outside organisational AI governance frameworks, meaning there is no way to monitor or audit its use. This lack of transparency can lead to errors, biased outputs, or even the misuse of AI-generated content.
  3. Reputational Damage
    If the unauthorised use of AI becomes public knowledge, it can erode trust in the organisation.
  4. Erosion of Human Skills
    Over-reliance on AI for routine tasks can lead to the “deskilling” of employees. As one Queensland public servant warned in the Jobs and Work report, “The APS and QPS are already facing the down-skilling of individuals; it is important to keep exercising our strategic brains.”
  5. Algorithmic Bias and Misinformation
    Generative AI tools make mistakes and hallucinate. Not all staff know what hallucinations are. AI models can produce biased or inaccurate outputs, which, if relied upon, could lead to flawed decision-making or operational errors.

How We Can Respond

Proactively.

  1. Acknowledge the Reality
    Shadow AI is only going to grow. Organisations must recognise its prevalence and understand why employees are turning to these tools.
  2. Provide Secure Alternatives
    Developing or deploying sanctioned AI tools, like the Queensland Government’s QChat (I use it myself, I know the guy who built it, it’s amazing), can offer employees a secure and compliant alternative to shadow AI.
  3. Educate Employees
    Training programs should focus on the risks of shadow AI and the importance of using approved tools. Employees need to understand the potential consequences of unauthorised AI use and the basics like hallucinations and ethics. (More ideas on how to do this below.)
  4. Strengthen Policies
    Creating clear guidelines on “Acceptable AI Use” with the IT department isn’t hard to do – I did it. The IT department already has tools and processes for monitoring and enforcement, and together these can help mitigate at least some of the risks. These policies should explicitly prohibit uploading sensitive data into external AI tools. Unfortunately, none of this stops people using AI on their personal assets – private laptops or phones – and emailing the results to themselves.
  5. Harness The Early Adopters
    It’s good that there’s so much employee enthusiasm. My experience is that a subset of staff, maybe 10%, are deeply passionate about AI. We should train them and set them up as a distributed knowledge hub of good practice that ‘normal people’ (the early and late majority, laggards) can go to with questions as they come up. They’re likely to be ‘in the real world’, seeing people do these things – and can offer sensible, peer-to-peer advice on the risks in situ.

Let’s Not Push It Underground

As always, there has to be a balance between empowering employees with AI and safeguarding data, privacy, and public trust. (Which is the whole point of having ethical frameworks and AI policies.) Outright banning of personal AI use risks pushing it ‘underground’ – people will still do it, under the table, on their phones.

I wrote about Embedded AI and the risks it represents back in July. This new report and these statistics highlight a second axis of risk. Both are pernicious.

To me, they underline the need for organisations to take AI Governance seriously, resource AI teams to manage the situation responsibly, and roll out broad, thorough training on AI covering ethics and the basics like hallucinations – if only to the motivated early adopters at this stage.