Arrochar Consulting

The Three-Person AI Delivery Team: What Large Organisations Need to Restructure Now

Arrochar Consulting·2026-03-23·12 min read

There is a restructuring happening inside enterprise delivery teams. It is not a pilot. It is not an experiment. And for the organisations that recognise it early, the advantage is structural - in cost, speed, and the ability to deliver AI work packages that actually land.

The model is this: a Product Manager, an Enterprise Architect, and an AI Engineer. A Design Thinker as a shared overhead across units. Small, capability-aligned delivery teams. And a clear-eyed view of what it costs to get this wrong - including the stranded assets already accumulating on technology balance sheets.

This is not about reducing headcount. It is about optimal configuration. It is about asking what roles AI now performs inside a delivery team and restructuring around what remains irreducibly human. If you are a CIO, a transformation lead, or a program director, this question is not theoretical. It is a structural decision with a window.

What Changed - and Why It Changed Fast

Until recently, delivering a meaningful capability uplift inside a large organisation required a layered team. A business analyst to capture requirements. A solutions architect to design the system. Developers to build it. QA to test it. A project manager to coordinate all of the above. Often a scrum master. Frequently a change manager. The team was large because every handoff between roles created a seam where something could go wrong.

AI has not eliminated that complexity. It has dramatically reduced the cost of execution at each of those seams.

An AI Engineer working with a modern agentic stack - large language models, retrieval pipelines, automation tooling, and code generation - can now produce what previously took three developers. Not because the AI Engineer is more skilled than those three developers, but because the tools they are operating have compressed the distance between intent and output. Code generation, test generation, documentation, API integration, data pipeline construction - each of these is now meaningfully faster. Not perfect. But fast enough to change the team model.

The cost reduction story is not "we replaced people." It is: "we restructured what people do because the execution layer is no longer the bottleneck."

This Is Not About Headcount. It Is About Configuration.

Before going further, this point deserves to stand on its own.

The three-person delivery team is not a redundancy exercise wearing a transformation costume. Organisations that approach it that way will dismantle capability they need and rebuild it two years later at greater cost.

The argument is different. Large organisations are currently configured to deliver projects. Waterfall or agile, the dominant model is: define scope, build a team, deliver, disband. That model was designed for a world where execution was the expensive, time-consuming constraint. When you have to write every line of code by hand, when every requirement needs a human to translate it into a specification, when every test needs a person to write and run it - large teams make sense.

AI has moved the constraint. Execution is no longer the bottleneck. Clarity of problem, architectural coherence, and judgment about what to build - these are now the constraints. The team you need is smaller, sharper, and differently skilled. Not fewer people doing the same thing. Different people doing a fundamentally different thing.

The Delivery Unit: Four Roles, One Work Package

The Product Manager owns the problem. In a lean team, this role becomes more important, not less. Without a large delivery team absorbing uncertainty, the PM's ability to define a crisp work package - clear scope, clear acceptance criteria, clear business value - is the single biggest determinant of whether the engagement delivers. PMs who are outcome-focused and scope-disciplined thrive in this structure. PMs who rely on the team to figure out what they meant do not.

The Enterprise Architect owns the system. In a lean team, the EA also absorbs a significant portion of what the Business Analyst previously did. Current and future state capability modelling, business process analysis, requirements structuring - these are within the EA's natural domain. The EA operating at pace, informed by AI tooling, can produce artefacts in days that previously took weeks. The EA is also the role that ensures work packages fit together - that the AI microservice being built today integrates coherently with the platform being built next quarter. Without this, you build fast and accumulate technical debt faster.

The AI Engineer is the new full-stack practitioner. This is not a developer who has learned to use ChatGPT. The AI Engineer understands prompt engineering, model selection, agentic workflow construction, retrieval-augmented generation, fine-tuning, and evaluation methodology. They understand what AI can and cannot do reliably in a production environment. They bridge the technical and the applied - able to build, validate, and iterate at a speed that changes what a work package can deliver.
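To make the retrieval-augmented pattern concrete, here is a minimal sketch of the core loop the AI Engineer assembles: retrieve relevant context, then ground the model's prompt in it. The keyword-overlap retriever stands in for a real vector store, the model call itself is omitted, and all names and documents are illustrative assumptions, not a production design.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    # Toy retriever: rank documents by keyword overlap with the query.
    # In practice this would be an embedding search against a vector store.
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[Document]) -> str:
    # Grounded prompt: retrieved context first, then the question,
    # with sources cited so outputs can be traced back.
    blocks = "\n".join(f"[{d.source}] {d.text}" for d in context)
    return f"Answer using only the context below.\n\n{blocks}\n\nQuestion: {query}"

# Illustrative corpus; a real one would be chunked enterprise content.
corpus = [
    Document("policy.pdf", "Leave requests must be approved by a line manager."),
    Document("handbook.pdf", "The office is closed on public holidays."),
]
query = "Who approves leave requests?"
prompt = build_prompt(query, retrieve(query, corpus, k=1))
```

The point of the sketch is the division of labour: retrieval and prompt assembly are deterministic, inspectable code; only the final generation step is probabilistic. That separation is what the AI Engineer designs, evaluates, and iterates on.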

The Design Thinker sits as a shared overhead across delivery units rather than embedded inside each team. This role brings human-centred design capability to AI work - and it is more important in the AI context than it was in traditional delivery. AI microservices that are technically correct but poorly designed for human consumption fail at adoption. The Design Thinker ensures that how the AI behaves from a user's perspective - how it presents outputs, how it handles edge cases, how it fits into a workflow rather than disrupting one - is considered at the point of design, not retrofitted after the fact. One Design Thinker supporting three or four delivery units is typically the right ratio.

Organise Around Capabilities, Not Projects

The delivery unit model only reaches its potential when the organisation stops structuring technology delivery around projects and starts structuring it around capabilities.

A capability-based delivery unit owns a domain: customer intelligence, regulatory reporting, workforce management, operational analytics. It is persistent, not project-bounded. It continuously evolves the capability it owns using lean team principles - running work packages sequentially or in parallel, building a coherent capability rather than a collection of disconnected outputs.

This is a meaningful shift. Project-based delivery creates orphaned outputs - systems built, handed over, and slowly degraded by teams that had no part in building them. Capability-based delivery creates ownership. The team that built it maintains it, evolves it, and understands it at depth.

For large organisations, this means restructuring technology functions away from traditional domain silos - infrastructure, applications, data, security - toward capability-aligned delivery units that cut across those domains. Each unit operates with its own PM, EA, and AI Engineer. Each unit draws on the shared Design Thinker and governance functions as needed.

This is not a small organisational change. It is a fundamental restructure of how technology functions are designed. But organisations that do not make this shift will find that their technology function is structurally incapable of delivering AI at the pace the business requires.

The Stranded Asset Risk Nobody Is Talking About Loudly Enough

Here is a risk that deserves more senior attention than it is currently receiving.

Most large organisations are mid-cycle on enterprise software contracts that were procured before the current generation of AI was a consideration. ERP platforms. CRM systems. Case management tools. Document management platforms. Workforce management software. These were purchased on multi-year agreements, often with significant implementation investment, and they represent a substantial portion of the technology estate.

The risk is not that these platforms are bad. Many of them perform their core functions adequately. The risk is that they are not AI-native - and the vendors who built them are responding to that gap by bolting AI features onto architectures that were not designed for AI. Copilots sitting on top of legacy data models. Generative interfaces layered over batch-processing backends. AI-branded capabilities that require clean, structured data the organisation does not have and the platform was never designed to produce.

When an organisation buys into these features - through renewals, through upgrades, through additional modules - they are making a bet that the vendor's AI roadmap will deliver. Some of those bets will pay off. Many will not. And the cost is not just the licensing. It is the opportunity cost of not building the AI-native capability that actually serves the business need, and the integration debt of trying to connect AI outputs from a vendor platform to the broader capability architecture.

The stranded asset risk is real. A technology estate full of AI-adjacent software that cannot actually deliver AI outcomes is a strategic liability. The Enterprise Architect's role in assessing and managing this risk - mapping the estate, identifying which platforms have genuine AI capability and which have AI labelling, and advising on where to build versus buy - is one of the most important functions in the organisation right now.

Leaders need to ask: how much of our current software spend is on platforms that are being positioned as AI-capable but are not structurally designed to deliver it? What does the stranded asset scenario look like in three years if those vendor roadmaps do not materialise? And what is our governance process for evaluating AI claims from existing and prospective vendors?

On the BA Question

Business analysis work does not disappear in this model. It migrates.

Some of it migrates to the EA, who now has both the framework and the tooling to do structured requirements analysis as part of system design. Some of it migrates to the AI Engineer, who can run a workshop transcript through a language model, synthesise themes, generate draft user stories, and have a requirements artefact in hours. Some of it migrates to the PM, who in a leaner team takes more direct ownership of user needs and stakeholder alignment.
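The transcript-to-requirements workflow above can be sketched in a few lines. The `llm` parameter here is a hypothetical interface (any callable that takes a prompt and returns text), and `fake_llm` is a canned stand-in so the sketch runs without an API key; neither represents a specific vendor's API.

```python
def draft_user_stories(transcript: str, llm) -> list[str]:
    # Ask a language model to synthesise a workshop transcript into
    # draft user stories. The output is a starting artefact for review,
    # not a finished requirements document.
    prompt = (
        "Extract the distinct user needs from this workshop transcript and "
        "rewrite each as a user story "
        "('As a <role>, I want <goal> so that <benefit>'):\n\n" + transcript
    )
    return [line.strip("- ").strip() for line in llm(prompt).splitlines() if line.strip()]

def fake_llm(prompt: str) -> str:
    # Stand-in model returning a fixed response for illustration.
    return "- As a case officer, I want automated triage so that backlogs shrink."

stories = draft_user_stories("Officers said triage takes too long...", fake_llm)
```

The human judgment has not gone anywhere: someone still validates that the drafted stories reflect what stakeholders actually meant. What has changed is that the first draft now costs hours, not weeks.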

What goes away is the BA as a standalone, full-time role embedded in every delivery unit. In organisations where business change is complex - multi-stakeholder, cross-jurisdictional, high-stakes - a senior BA or change analyst brought in for a defined phase still adds real value. The question is whether that is a core team member or an advisory input. For most AI work packages in a lean delivery model, it is the latter.

The Technology Restructure Is a Prerequisite

It is worth being direct about the structural implications of all of the above.

Large organisations cannot deliver AI capability at pace if their technology function is still organised around the assumptions of the previous era. Infrastructure teams that do not understand AI infrastructure. Applications teams structured around software delivery rather than capability ownership. Data functions that sit separately from delivery rather than embedded within it. Security and architecture functions that operate as slow-moving gatekeepers rather than embedded advisors.

The organisational structure of the technology function is either an accelerant or a constraint. For most large organisations today, it is a constraint. Not because the people are wrong. Because the structure was designed for a different kind of work and has not yet adapted.

Restructuring around capability delivery units is part of the answer. But it requires accompanying changes: governance that operates at the speed of AI delivery, architecture standards that guide rather than constrain, security frameworks that assess AI-specific risk rather than applying legacy controls to new contexts, and data capability that is genuinely integrated into delivery rather than treated as a separate lane.

This is the hardest conversation in enterprise technology right now. It is also the most important one.

The Talent Question Leaders Are Not Asking Yet

Most large organisations do not yet have an AI Engineer in the sense described above. They have developers who are learning. They have data scientists who are adapting. They have vendors who are proposing. What they rarely have is someone who has built and delivered production AI capabilities at the intersection of enterprise architecture, business requirements, and applied model engineering - and who can operate at the pace this team model demands.

The EA who can operate in this structure is also not common. The EA who thrives in a lean AI team combines system thinking with hands-on engagement - able to produce a reference architecture, facilitate a requirements session, and review an AI Engineer's output for architectural fit in the same week.

The Design Thinker who understands AI microservice design - not just UX design, but the specific challenge of designing interactions and outputs for AI systems that behave probabilistically - is genuinely rare.

And the PM who can hold this team together with the precision this model demands is the rarest of the four.

The conversation about the capability delivery unit model is ultimately a talent conversation. Do you have these roles? Can you develop them internally? Can you access them through a partner who can operate this way - and transfer that capability into your organisation while doing so?

That is the question worth sitting with.

Is This the New Way of Working?

For AI work packages inside large organisations - analytics platforms, agentic workflow tools, data governance tooling, AI-enabled process automation, reporting infrastructure - the capability delivery unit model is not a future state. It is a current capability that the organisations who understand it are already deploying.

The organisations that move on it now will have structural advantages that compound: better-configured teams, cleaner technology estates, capability ownership that actually persists, and the institutional knowledge of having built and operated this model rather than having read about it.

The question for leaders is not whether the model works. It demonstrably does. The question is whether your organisation is structured to use it - whether your technology function can be reorganised around capabilities, whether your EA and AI Engineer talent exists or can be built, whether your software estate is genuinely AI-capable or merely AI-labelled, and whether your leadership team has the appetite to restructure delivery rather than simply accelerate the existing model.

Accelerating the existing model will not be enough.

Arrochar Consulting works with government and enterprise organisations on AI readiness, data strategy, and delivery architecture. If you are thinking through team structure, capability delivery models, or technology estate risk for AI transformation programs, we are happy to compare notes.
