You know what's broken.
Now fix it, layer by layer.
The Foundation Review told you exactly where your gaps are and in what order to address them. A Layer Build Programme is how you close those gaps — with structured, fixed-price work that produces deliverables you own, and foundations that AI can actually perform on.
The layers aren't independent. They depend on each other.
AI does not perform in isolation from the data it reads, the systems it connects to, the processes it follows, or the governance structures that oversee it. Neither do the layers.
Layer 3 — workflow and process — depends on Layer 2 having reliable data flows in place. Layer 2 depends on Layer 1 providing data worth flowing. You cannot automate a process that has never been written down. You cannot connect systems that have nothing consistent to share.
This is not a commercial constraint. It is the architecture. Attempting to build Layer 3 on a broken Layer 1 produces the same failure the Foundation Review was commissioned to prevent.
Your Foundation Review roadmap specifies which layers need attention and in what order. That sequence governs how the Layer Builds are commissioned.
The layer AI reads from.
Every AI capability your business deploys performs exactly as well as the data underneath it. Not better. Fragmented records, inconsistent field definitions, ungoverned pipelines, and data no one quite trusts mean your AI is working from a broken picture — and producing outputs that reflect that brokenness at scale. This layer build fixes the infrastructure underneath.
What the work covers
- Data audit across all business-critical systems — mapping where data lives, how it is structured, who owns it, and how it flows between systems. Surfacing duplication, contradiction, and gaps.
- Data quality scoring by domain — assessing completeness, consistency, freshness, and accuracy for the data sets your AI use cases depend on. Scored against a five-point readiness scale.
- Root cause analysis per gap — tracing quality failures to their source rather than treating symptoms. Process breakdowns, tooling issues, ownership vacuums, and ingestion failures identified and documented.
- Data governance framework — defining ownership, access controls, quality standards, and update frequency obligations for each data domain. Written as a document your team can maintain.
- Remediation plan with sequenced priorities — specific actions ranked by impact on AI readiness, with effort estimates and dependency flags. Not a general recommendation — a specific plan for your situation.
- AI-readiness certification for priority data assets — formal confirmation that the data sets required by your planned AI use cases meet the quality threshold necessary for reliable model performance.
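To make the scoring deliverable concrete, here is a minimal sketch of how four dimension scores might roll up onto a five-point readiness band. Every field name, weight, and threshold below is a hypothetical illustration, not the actual assessment methodology.

```python
# Illustrative sketch only: rolling four data quality dimensions up
# onto a 1-5 readiness band. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class DomainScores:
    completeness: float  # 0.0-1.0: share of required fields populated
    consistency: float   # 0.0-1.0: agreement across systems of record
    freshness: float     # 0.0-1.0: share of records updated within SLA
    accuracy: float      # 0.0-1.0: share passing validation checks

def readiness_band(scores: DomainScores) -> int:
    """Map the four dimension scores onto a 1-5 readiness band."""
    dims = (scores.completeness, scores.consistency,
            scores.freshness, scores.accuracy)
    avg = sum(dims) / 4
    # A single weak dimension caps the band: AI use cases fail on
    # the worst dimension, not on the average.
    band = 1 + round(avg * 4)        # 1..5 from the average
    cap = 1 + int(min(dims) * 4)     # cap from the weakest dimension
    return min(band, cap)

crm = DomainScores(completeness=0.92, consistency=0.78,
                   freshness=0.85, accuracy=0.88)
print(readiness_band(crm))
```

The design point the sketch makes: averaging alone hides a single broken dimension, which is precisely the failure mode the domain-by-domain scoring is there to catch.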
Who needs to be involved
What this layer does not include
- Building or rebuilding data pipelines or warehouse infrastructure
- Hands-on data cleaning or migration work
- Selecting or procuring data tooling or platforms
The layer AI connects through.
AI agents operate across systems. They do not live inside a single application. Disconnected platforms mean the agent can only see part of the picture — and acts on that partial view. Most businesses run twenty or more SaaS tools that were never designed to speak to each other. AI sits on top of that complexity and inherits every broken connection, every delayed sync, every field that means different things in different systems. This layer build maps, rationalises, and prepares the integration layer before AI deployment begins.
What the work covers
- Full SaaS stack audit and dependency map — every platform in operational use catalogued, with data flows, integration status, ownership, contract terms, and renewal dates documented in a single reference document.
- Integration risk identification and classification — each integration assessed for stability, data consistency, latency, and single-point-of-failure risk. Priority risk register produced and owned.
- API readiness assessment for AI connectivity — evaluating whether your platforms expose the APIs and data access patterns that your planned AI use cases require. Gaps documented with vendor escalation guidance where needed.
- Rationalisation recommendation — independent analysis of which platforms in your stack serve overlapping functions, where consolidation would reduce integration complexity, and what the sequencing of any rationalisation should look like.
- Integration architecture specification for AI — a documented target state describing how your systems need to share data for your priority AI use cases to function reliably. Produced as a specification your technical team or implementation partner can build to.
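As a rough illustration of what the dependency map and risk register capture, here is a sketch of one possible catalogue shape with a naive single-point-of-failure check. The platform names and fields are invented examples, not a prescribed schema.

```python
# Illustrative sketch only: a toy SaaS stack catalogue and a naive
# single-point-of-failure check. Names and fields are hypothetical.
platforms = {
    "crm":       {"owner": "Sales Ops", "feeds": ["billing", "marketing"]},
    "billing":   {"owner": "Finance",   "feeds": ["reporting"]},
    "marketing": {"owner": "Marketing", "feeds": ["reporting"]},
    "reporting": {"owner": "Finance",   "feeds": []},
}

def single_points_of_failure(stack: dict) -> list[str]:
    """Platforms feeding two or more downstream systems: if one
    breaks, every dependent data flow breaks with it."""
    return sorted(name for name, p in stack.items()
                  if len(p["feeds"]) >= 2)

print(single_points_of_failure(platforms))  # crm is the only one here
```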
Who needs to be involved
What this layer does not include
- Building integrations or writing integration code
- Vendor negotiation or contract renegotiation
- Platform migration or procurement decisions
The layer humans and AI share.
You cannot automate a process that has never been written down. And you cannot build an AI assistant on knowledge that has never been captured. This is the most overlooked layer — and the one that causes the most implementation failures. AI does not remove work. It redistributes it. Without redesigned workflows and a structured change management programme, AI tools get adopted in pockets and never embedded in practice. The technology works. The people don't use it. This layer build fixes that.
What the work covers
- End-to-end workflow mapping for AI-target processes — taking your three to five highest-priority automation candidates from input to output, documenting every step, decision point, exception, and handoff in a format that can be acted on by a human, a tool, or an AI agent.
- Decision logic extraction — surfacing the unstated rules, judgement calls, and contextual knowledge that experienced staff apply but have never documented. This is often the highest-value work in the layer: the information that currently lives only in people's heads and leaves the business when they do.
- Human-AI handoff design for each use case — defining precisely where the AI operates autonomously, where human review is required, where escalation occurs, and who is accountable for each output. Not a principles document — a specific decision for each process.
- Change management programme (8-stage) — a structured approach to embedding AI in operational practice that addresses resistance, builds confidence, and establishes the habits necessary for adoption to stick rather than fade after the first few weeks.
- Role and responsibility redesign — where the introduction of AI changes what a role requires day-to-day, we redesign the role description and accountability framework accordingly. Not a redundancy exercise — a re-scoping exercise.
- Adoption metrics framework and 90-day monitoring plan — defining what successful adoption looks like in measurable terms, with a specific monitoring structure for the first ninety days after AI deployment to catch adoption drift before it becomes entrenched.
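To show the kind of check the 90-day monitoring plan might contain, here is a minimal sketch that flags adoption drift when weekly active usage falls away from its peak. The metric and the 15% threshold are hypothetical examples, not the framework's actual definitions.

```python
# Illustrative sketch only: flag adoption drift when weekly active
# usage drops more than a relative threshold below its peak so far.
# The metric and threshold are hypothetical examples.
def adoption_drift(weekly_active_pct: list[float],
                   drop_threshold: float = 0.15) -> bool:
    """True when usage falls more than `drop_threshold` (relative)
    from the peak observed so far."""
    peak = 0.0
    for pct in weekly_active_pct:
        peak = max(peak, pct)
        if peak > 0 and (peak - pct) / peak > drop_threshold:
            return True
    return False

# Healthy ramp-up vs. a fade after week three.
print(adoption_drift([0.30, 0.45, 0.55, 0.60, 0.62]))
print(adoption_drift([0.30, 0.55, 0.60, 0.48, 0.40]))
```

The point of a drift check like this is timing: a fade caught in week four is a coaching conversation; the same fade caught at day ninety is a failed rollout.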
Who needs to be involved
Why this layer takes longest
- Decision logic extraction requires extended structured interviews — the knowledge takes time to surface
- Change management runs in parallel with workflow mapping and must be paced to the organisation
- Human-AI handoff design often requires iteration once draft processes are reviewed by operational staff
The layer leadership is accountable for.
Deploying AI without a governance framework is not a technology risk. It is a liability risk. And unlike most risks, it compounds silently until something goes wrong in public. Regulatory exposure, data protection failures, model output errors that reach clients, bias baked into automated decisions — every one of these is preventable with the right governance structure in place before deployment begins. This layer build produces that structure: specific, owned, and documented. Not a policy that sits in a folder. A framework that actually governs how AI is used in your business.
What the work covers
- AI governance policy — ownership, accountability, escalation — a written policy governing how AI tools are adopted, assessed, used, and monitored in your organisation. Includes a named accountability structure and clear escalation paths for failures and disputes.
- UK AI Act and GDPR exposure assessment — mapping your planned AI use cases against current and incoming regulatory obligations. Identifying high-risk use cases that require enhanced oversight and data processing activities requiring review under UK GDPR.
- Model risk assessment framework — a repeatable process for assessing the risk profile of each AI tool or model before deployment, covering output reliability, data handling, vendor accountability, and update cadence.
- Audit trail design — defining what needs to be logged, by whom, and for how long, to support both regulatory compliance and internal accountability. Covers AI-generated outputs, human review decisions, and model version changes.
- Bias, transparency, and explainability framework — establishing what your organisation requires in terms of understanding and being able to explain AI-generated outputs, with specific requirements per use case based on regulatory exposure and client-facing risk.
- Board-level AI risk reporting structure — defining what AI risk information should be reported to the board, at what frequency, in what format, and with what ownership. Includes a first-edition board AI risk summary completed for your current tool set.
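As an illustration of what the audit trail design specifies, here is a minimal sketch of a single audit record covering the three things the trail must capture: the AI-generated output, the human review decision, and the model version. All field names are hypothetical, not the engagement's actual logging schema.

```python
# Illustrative sketch only: one possible minimal audit record for an
# AI-generated output, serialised as a JSON line for append-only
# storage. Field names are hypothetical.
import json
from datetime import datetime, timezone

def audit_record(use_case: str, model_version: str,
                 output_summary: str, reviewer: str,
                 decision: str) -> str:
    """Serialise one audit entry as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "model_version": model_version,
        "output_summary": output_summary,
        "reviewed_by": reviewer,
        "review_decision": decision,  # e.g. approved / amended / rejected
    }
    return json.dumps(entry)

print(audit_record("client-letter-draft", "v2.3.1",
                   "Draft renewal letter, 2 paragraphs",
                   "j.smith", "amended"))
```

Append-only JSON lines are one simple way to satisfy both halves of the requirement: regulators get a tamper-evident record, and internal reviews can reconstruct who approved what under which model version.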
Who needs to be involved
An important distinction
- We identify regulatory exposure and design governance frameworks. We do not provide legal advice. For legally binding assessments, you will need your legal counsel to review and ratify our findings.
- The L4 price range is wider than other layers because governance complexity varies significantly by sector and by the number and nature of AI use cases in scope.
These programmes fix readiness gaps. Not everything.
There are four things clients sometimes expect from a Layer Build that we want to be clear about before the engagement begins.
We do not build the AI implementation
A Layer Build produces the foundations AI needs and the documentation your implementation partner or internal team needs to build correctly. We do not build the AI system itself, write production code, or manage deployment. When the foundations are ready, that work is yours to commission from whoever is best placed to do it.
Layer Builds do not guarantee AI success
They guarantee that the specific gaps identified in your Foundation Review are addressed. AI outcomes depend on many factors — model capability, use case selection, implementation quality, and market conditions among them. What Layer Builds guarantee is that the foundations will no longer be the reason it fails.
We will tell you if the build reveals additional gaps
The Foundation Review is a structured assessment, not a forensic audit. Layer Builds sometimes surface gaps that the Review could not see without deeper access. When that happens, we tell you directly and discuss the implications for your roadmap. Scope changes require a new conversation — we do not expand engagements without agreement.
The work requires your team's genuine involvement
A Layer Build is not something we do to your business while it carries on around us. Decision logic extraction, process mapping, governance accountability, and change management all require meaningful participation from the people who run your operations. We will tell you in the scoping call exactly what that involvement looks like.
All four layers. One coordinated engagement.
For businesses where the Foundation Review identifies gaps across all four layers, the Full Foundation Programme delivers a coordinated engagement — one team, one methodology, one price — over five to seven months. Layers build on each other as they complete, so the work is sequenced correctly and each layer informs the next.
The Full Programme is priced at a meaningful reduction on the total of individual layer commissions. It also includes the Foundation Review itself, so the entire journey from initial diagnostic to all four foundations built is covered in a single engagement with one agreed price before anything begins.
What people ask before committing to a Layer Build.
If your question is not here, ask it during the scoping call. We answer everything directly.
The Foundation Review tells you what to fix. Layer Builds fix it.
If you have already completed a Foundation Review and are ready to discuss the next step, use the Get Started form. If you have not yet done a Review, that is where to begin.