The Idea in 60 Seconds
- Traditional AI fails due to human-centred breakdowns: Most AI projects stall because of misaligned use cases, change resistance, and poor integration, not technical flaws.
- AGI introduces a new kind of challenge: It won’t fail due to lack of adoption, but due to organisations being structurally unprepared to support it.
- Early AGI will be high-leverage and low-friction: Instead of mass rollouts, AGI will be embedded with select high-context individuals, bypassing the usual change barriers.
- New bottlenecks will emerge at the top: AGI will stress-test governance, legal, and executive responsiveness, not frontline staff workflows.
- Invisible productivity gains will mask disruptive potential: AGI’s impact will show up in outcomes, not dashboards, making early signals easy to miss.
- AGI will overwhelm unprepared systems: Its relentless output risks saturating human decision points unless organisations restructure to absorb it.
- Misalignment risks are existential, not hypothetical: AGI will follow logic, not instinct, escalating actions that may be correct but culturally or ethically unacceptable.
85% of Generative AI Projects Fail
Despite ballooning investment and endless hype, over 85% of AI projects fail. The reasons are surprisingly obvious when you look into them, and it’s basic stuff: a lack of clarity on user needs, a desire to build AI even when crowbarring the technology into a solution isn’t the right answer, and simple things like the human aspects of change management.
Thinking ahead to how we will implement AGI, it seems the proportion of projects likely to succeed will rise. That presents opportunities for people who work in AI.
What is AGI?
AGI stands for Artificial General Intelligence. Unlike today’s AI systems, which are narrow, task-specific tools trained for limited functions, AGI refers to machines capable of understanding, learning, and reasoning across a broad range of tasks at a human-like level of intelligence.
Predictions as to when it will arrive vary, but some suggest it could be here within the next couple of years (by 2028). I have a very credible data scientist colleague who tracks progress towards AGI, and his graph says about the same thing: it’ll be here in two years.
It’s early days, of course, but it seems to me that when it arrives, AGI will not fail for the same reasons traditional Generative AI projects currently do. The old problems of resistance to change, fragmented user training, and workflow misalignment will largely fall away.
It seems to me that organisations that survive this transition will be the ones that recognise this pivot early. That shift their lens from “adoption” to “alignment.” That understand AGI isn’t a tool to be installed but an actor to be enabled, bounded, and partnered with. And the first hurdle will be internal: not whether AGI can help you, but whether you’re ready to help it.
The Old Failure: Why Traditional AI Stumbles
The standard AI implementation lifecycle—define a use case, train / prompt a model, deploy, wait for ROI—has a dismal track record. Most failures cluster around three pressure points:
a. Misaligned Requirements
Too many AI initiatives begin with executives demanding AI rather than users articulating need. This results in solutions no one asked for, solving problems no one owns.
b. Organisational Resistance
Even when a tool does its job, it often fails because it changes workflows in ways that unsettle staff. There is often (from what I’ve seen) very little training. Change managers are an afterthought. Executives forget the “K” in ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement): knowledge. People simply stick with the way they used to do things and don’t use the tools.
c. Integration Breakdown
AI is often treated as a black box. It’s dropped into an ecosystem without full consideration of upstream data quality or downstream decision-making paths. The result: great pilot demos, then behaviour at scale that doesn’t match expectations.
In all three cases, the failure is human-centric. Not that humans reject AI, but that organisations fail to build AI around the misgivings, fears, and incentives of the humans who must use it.
AGI will be implemented very differently.
How AGI Will Be Implemented
AGI isn’t just a smarter chatbot or a faster tool. It’s a new type of actor in an organisation. AGI will be an autonomous agent with general capabilities, able to interpret, reason, plan, and act across contexts. We will set it a goal and watch it work.
a. Single-User, High-Leverage Deployment
In my view, early AGI implementations won’t be rolled out across 1,000-person teams. They will be embedded alongside a single high-context human: a strategic analyst, a policy lead, a CTO. It will have to be someone who understands AI, knows the business’s processes, and has broad commercial skills. I suspect that individual will start by overseeing one AGI agent and end up overseeing many.
This way of implementing the next generation of AI projects eliminates most of the ADKAR friction. No change management is needed for 500 people. There’s just one deeply capable operator, already AI-literate, whose workflows get augmented and accelerated.
The Return of Resource and Governance Constraints
So, AGI doesn’t need team adoption. What is likely to happen, however, is that it will push hard against organisational structures. It will request access to databases, systems, and documents. It will suggest major decisions faster than the chain of command can absorb them. It will identify a hundred things to improve when the organisation can only focus on one at a time. It will make constant requests for resources and budget.
This introduces a new bottleneck: institutional responsiveness. The problem isn’t change management any more; it’s having a management chain that can intelligently review and support the AGI in situ.
Planning for AGI: What Organisations Could Do Now
If AGI isn’t going to be implemented like current AI tools, then preparing for it means abandoning the way AI projects have been run to this point. Instead, organisations need to prepare structurally, not socially.
Identify the Right Hosts
The first deployments won’t be mass adoption campaigns. They’ll be surgical insertions with a few high-bandwidth humans. These people need to:
- Understand both AI and organisational complexity.
- Be trusted internally.
- Have enough seniority or influence to initiate high-stakes workflows.
- Know how to escalate appropriately.
This is a talent identification problem, not a training one. It’s the sort of job I’d love.
Create Clear Escalation Channels
The AGI agent will produce suggestions and requests that don’t fit current workflows—asking for access to systems, suggesting policy rewrites, drafting 12-month strategic plans in 6 minutes.
If these inputs aren’t channelled properly, they’ll bottleneck or get ignored. Organisations need lightweight but responsive structures (see the sketch after this list):
- A designated escalation panel (legal, cyber, risk, senior ops).
- A budget ringfenced for AGI-led initiatives.
- Clear governance rules for what AGI can action unilaterally vs. with human oversight.
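To make that last point concrete, here is a minimal sketch of how such governance rules might be encoded. Everything in it, the action types, oversight tiers, and budget figures, is an illustrative assumption of mine, not a reference implementation:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Illustrative oversight tiers; the categories are assumptions, not a standard.
class Oversight(Enum):
    UNILATERAL = auto()        # AGI may act without sign-off
    HUMAN_REVIEW = auto()      # a named owner must approve
    ESCALATION_PANEL = auto()  # legal / cyber / risk / senior ops panel

@dataclass(frozen=True)
class GovernanceRule:
    action_type: str      # e.g. "read_internal_docs", "draft_policy"
    oversight: Oversight
    budget_limit: float   # spend the agent may commit, in GBP

# A hypothetical starting rulebook, reflecting the list above.
RULEBOOK = [
    GovernanceRule("read_internal_docs", Oversight.UNILATERAL, 0),
    GovernanceRule("draft_policy", Oversight.HUMAN_REVIEW, 0),
    GovernanceRule("commit_spend", Oversight.ESCALATION_PANEL, 10_000),
]

def required_oversight(action_type: str) -> Oversight:
    """Look up the rule; default unclassified actions to the strictest path."""
    for rule in RULEBOOK:
        if rule.action_type == action_type:
            return rule.oversight
    return Oversight.ESCALATION_PANEL
```

The design point is the fallback: anything the rulebook hasn’t classified routes to the escalation panel by default, so the agent can’t act unilaterally through a gap in the rules.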
Track and Evaluate Silent Uplift
The real benefits of early AGI use won’t be in dashboards. They’ll be in invisible productivity uplift: a strategy that lands three months early, a tender that’s 3x sharper, an exec that’s suddenly visionary.
Organisations could learn to notice these signals by tracking performance outputs as well as usage logs (a minimal sketch follows this list):
- Setting baselines for output quality and velocity.
- Comparing AGI-assisted work against historical performance.
- Building trust narratives internally so others start to adopt.
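As a thought experiment, the baseline-versus-assisted comparison could be as simple as the sketch below. The quality scale, field names, and metrics are my own assumptions; the point is that uplift is measured on outputs, not on usage logs:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class WorkItem:
    quality_score: float   # e.g. a 0-10 panel rating; the scale is an assumption
    days_to_deliver: float
    agi_assisted: bool

def silent_uplift(items: list[WorkItem]) -> dict[str, float]:
    """Compare AGI-assisted work against the historical baseline.
    Assumes both groups are non-empty."""
    baseline = [i for i in items if not i.agi_assisted]
    assisted = [i for i in items if i.agi_assisted]
    return {
        # positive = assisted work is rated higher than the baseline
        "quality_uplift": mean(i.quality_score for i in assisted)
                          - mean(i.quality_score for i in baseline),
        # positive = assisted work lands sooner than the baseline
        "velocity_gain_days": mean(i.days_to_deliver for i in baseline)
                              - mean(i.days_to_deliver for i in assisted),
    }
```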
New Hurdles: AGI Will Introduce Problems You Haven’t Faced Yet
While AGI removes the need for broad-based human adoption (training, ADKAR, cultural shifts), it replaces those with a new class of organisational risks.
a. Resource Saturation and Load Balancing
An AGI agent doesn’t get tired. If allowed, it will generate 10x the output of a typical team and then escalate it for human approval, execution, or refinement. The bottleneck is human management. Unless organisations plan for that surge, they’ll choke on their own throughput. (A toy model of this follows the list below.)
This will look like:
- Strategy teams suddenly inundated with viable but unresourced plans.
- Legal getting dozens of drafted policies in need of review.
- IT fielding integration requests that exceed current architecture.
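Here is a toy model of that bottleneck, with all proposals and numbers invented for illustration: the AGI raises more items per cycle than reviewers can clear, so each cycle must triage explicitly rather than let an unbounded queue build up:

```python
import heapq

def triage(proposals: list[tuple[int, str]], capacity: int):
    """proposals: (priority, description) pairs; lower number = more urgent.
    Reviewers can clear only `capacity` items per cycle; the rest are
    parked as a visible backlog to be re-triaged next cycle."""
    heapq.heapify(proposals)
    reviewed = [heapq.heappop(proposals)
                for _ in range(min(capacity, len(proposals)))]
    parked = list(proposals)
    return reviewed, parked

# Hypothetical cycle: the AGI raised 8 proposals; legal can review 3.
reviewed, parked = triage(
    [(2, "policy rewrite"), (1, "cyber gap fix"), (3, "new tender draft"),
     (5, "org restructure"), (4, "budget case"), (2, "vendor swap"),
     (6, "process tweak"), (7, "branding idea")],
    capacity=3,
)
print(f"reviewed: {reviewed}")
print(f"parked: {len(parked)} items await the next cycle")
```

The design choice worth noticing is that the backlog is explicit and re-prioritised every cycle, which is the structural opposite of letting suggestions pile up in an inbox until they’re ignored.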
b. Risk of Misalignment with Organisational Will
AGI can and will act within the letter of the brief but without organisational instinct or intuition. It might suggest:
- Terminating underperformers at scale.
- Redefining business models before stakeholders are ready.
- Making legal-but-unethical optimisations.
Without a strong command structure, and human and commercial review of its recommendations, the AGI could become a liability.
Conclusion: AGI Projects Feel Like a Shift in the Centre of Gravity
AGI will not fail for the reasons GenAI did. It won’t be underutilised, misunderstood, or ignored. It will be over-productive, overly literal, and relentless in what it generates. The risk with AGI projects looks like it might be acceleration without governance.
This is where the opportunity lies. Having a think about how this would pan out in your organisation might be a worthwhile thought experiment. It might include translating organisational ethos into agent-aligned frameworks, creating escalation pathways for ambiguity and edge cases, and/or ensuring AGI acts in concert with, not in spite of, human will.