You know what's broken.
Now fix it, layer by layer.

The Foundation Review told you exactly where your gaps are and in what order to address them. A Layer Build Programme is how you close those gaps — with structured, fixed-price work that produces deliverables you own, and foundations that AI can actually perform on.

Layer Build Programmes are commissioned after the Foundation Review is complete. The Review identifies which layers need attention and sequences the work correctly. We do not sell Layer Builds without it.
Why order matters

The layers aren't independent. They depend on each other.

AI does not perform in isolation from the data it reads, the systems it connects to, the processes it follows, or the governance structures that oversee it. Neither do the layers.

Layer 3 — workflow and process — depends on Layer 2 having reliable data flows in place. Layer 2 depends on Layer 1 providing data worth flowing. You cannot automate a process that has never been written down. You cannot connect systems that have nothing consistent to share.

This is not a commercial constraint. It is the architecture. Attempting to build Layer 3 on a broken Layer 1 produces the same failure the Foundation Review was commissioned to prevent.

Your Foundation Review roadmap specifies which layers need attention and in what order. That sequence governs how the Layer Builds are commissioned.

  • Layer 4 — Governance & Risk. Depends on L3: you cannot govern what has not been documented.
  • Layer 3 — Workflow & Process. Depends on L2: you cannot document handoffs between disconnected systems.
  • Layer 2 — Systems & Integration. Depends on L1: you cannot connect systems that share inconsistent data.
  • Layer 1 — Data Infrastructure. The floor. Everything above depends on this being sound.

Foundation — fix from the bottom up.
L1
Layer One — Data Infrastructure

The layer AI reads from.

Every AI capability your business deploys performs exactly as well as the data underneath it. Not better.

AI systems are only as useful as the data they operate on. Fragmented records, inconsistent field definitions, ungoverned pipelines, and data no one quite trusts mean your AI model is working from a broken picture — and producing outputs that reflect that brokenness at scale. This layer build fixes the infrastructure underneath.

Fixed price £12k–£16k + VAT · 6–8 weeks

What the work covers

  • Data audit across all business-critical systems — mapping where data lives, how it is structured, who owns it, and how it flows between systems. Surfacing duplication, contradiction, and gaps.
  • Data quality scoring by domain — assessing completeness, consistency, freshness, and accuracy for the data sets your AI use cases depend on. Scored against a five-point readiness scale.
  • Root cause analysis per gap — tracing quality failures to their source rather than treating symptoms. Process breakdowns, tooling issues, ownership vacuums, and ingestion failures identified and documented.
  • Data governance framework — defining ownership, access controls, quality standards, and update frequency obligations for each data domain. Written as a document your team can maintain.
  • Remediation plan with sequenced priorities — specific actions ranked by impact on AI readiness, with effort estimates and dependency flags. Not a general recommendation — a specific plan for your situation.
  • AI-readiness certification for priority data assets — formal confirmation that the data sets required by your planned AI use cases meet the quality threshold necessary for reliable model performance.

Who needs to be involved

Client-side stakeholders we will need to work with
Operations lead · IT / systems owner · Data or BI team · CRM / platform owner · Department heads (data sources) · Finance (transactional data)

What this layer does not include

  • Building or rebuilding data pipelines or warehouse infrastructure
  • Hands-on data cleaning or migration work
  • Selecting or procuring data tooling or platforms
What you leave with
  • Data Audit Report — full layer-by-layer assessment of your data estate, scored and evidenced.
  • Governance Framework — written ownership, standards, and accountability document, maintainable without us.
  • Remediation Roadmap — sequenced action plan with effort estimates and dependency flags per item.
  • AI Readiness Certification — formal sign-off on priority data assets confirming readiness for AI deployment.
  • Stakeholder Readout — 90-minute presentation of findings and next steps with your leadership team.
  • L2 Readiness Signal — clear guidance on whether your data state supports progressing to Layer 2 work.
L2
Layer Two — Systems & Integration

The layer AI connects through.

AI agents operate across systems. They do not live inside a single application. Disconnected platforms mean the agent can only see part of the picture — and acts on that partial view.

Most businesses run twenty or more SaaS tools that were never designed to speak to each other. AI sits on top of that complexity and inherits every broken connection, every delayed sync, every field that means different things in different systems. This layer build maps, rationalises, and prepares the integration layer before AI deployment begins.

Fixed price £14k–£18k + VAT · 6–10 weeks

What the work covers

  • Full SaaS stack audit and dependency map — every platform in operational use catalogued, with data flows, integration status, ownership, contract terms, and renewal dates documented in a single reference document.
  • Integration risk identification and classification — each integration assessed for stability, data consistency, latency, and single-point-of-failure risk. Priority risk register produced and owned.
  • API readiness assessment for AI connectivity — evaluating whether your platforms expose the APIs and data access patterns that your planned AI use cases require. Gaps documented with vendor escalation guidance where needed.
  • Rationalisation recommendation — independent analysis of which platforms in your stack serve overlapping functions, where consolidation would reduce integration complexity, and what the sequencing of any rationalisation should look like.
  • Integration architecture specification for AI — a documented target state describing how your systems need to share data for your priority AI use cases to function reliably. Produced as a specification your technical team or implementation partner can build to.

Who needs to be involved

Client-side stakeholders we will need to work with
IT lead / CTO · Operations director · Platform owners (CRM, ERP, finance) · Development team (if internal) · Procurement (vendor contacts)

What this layer does not include

  • Building integrations or writing integration code
  • Vendor negotiation or contract renegotiation
  • Platform migration or procurement decisions
What you leave with
  • SaaS Dependency Map — complete catalogue of your stack with integration status, ownership, and risk rating per platform.
  • Integration Risk Register — prioritised list of integration vulnerabilities with specific remediation guidance per item.
  • AI Architecture Specification — target-state integration document specifying what needs to change for your AI use cases to function.
  • Rationalisation Recommendation — independent view on where platform consolidation would reduce complexity and cost.
  • Leadership Readout — 90-minute session presenting findings, risks, and recommended next steps to your decision-makers.
  • L3 Readiness Signal — clear assessment of whether your integration state supports progressing to workflow and process work.
L3
Layer Three — Workflow & Process

The layer humans and AI share.

You cannot automate a process that has never been written down. And you cannot build an AI assistant on knowledge that has never been captured.

This is the most overlooked layer — and the one that causes the most implementation failures. AI does not remove work. It redistributes it. Without redesigned workflows and a structured change management programme, AI tools get adopted in pockets and never embedded in practice. The technology works. The people don't use it. This layer build fixes that.

Fixed price £16k–£20k + VAT · 8–12 weeks

What the work covers

  • End-to-end workflow mapping for AI-target processes — taking your three to five highest-priority automation candidates from input to output, documenting every step, decision point, exception, and handoff in a format that can be acted on by a human, a tool, or an AI agent.
  • Decision logic extraction — surfacing the unstated rules, judgement calls, and contextual knowledge that experienced staff apply but have never documented. This is often the highest-value work in the layer: the information that currently lives only in people's heads and leaves the business when they do.
  • Human-AI handoff design for each use case — defining precisely where the AI operates autonomously, where human review is required, where escalation occurs, and who is accountable for each output. Not a principles document — a specific decision for each process.
  • Change management programme (8-stage) — a structured approach to embedding AI in operational practice that addresses resistance, builds confidence, and establishes the habits necessary for adoption to stick rather than fade after the first few weeks.
  • Role and responsibility redesign — where the introduction of AI changes what a role requires day-to-day, we redesign the role description and accountability framework accordingly. Not a redundancy exercise — a re-scoping exercise.
  • Adoption metrics framework and 90-day monitoring plan — defining what successful adoption looks like in measurable terms, with a specific monitoring structure for the first ninety days after AI deployment to catch adoption drift before it becomes entrenched.

Who needs to be involved

Client-side stakeholders we will need to work with
Operations director / COO · Department managers · Process owners (named per workflow) · HR / people team · Senior individual contributors · CEO / MD (change endorsement)

Why this layer takes longest

  • Decision logic extraction requires extended structured interviews — the knowledge takes time to surface
  • Change management runs in parallel with workflow mapping and must be paced to the organisation
  • Human-AI handoff design often requires iteration once draft processes are reviewed by operational staff
What you leave with
  • Process Documentation Library — end-to-end maps for all AI-target workflows, including decision logic, exceptions, and handoffs.
  • Human-AI Handoff Design — specific accountability assignments per process step: who decides, who reviews, who escalates.
  • Change Management Programme — 8-stage adoption plan with communications, training, and checkpoint structure built in.
  • Adoption Metrics Framework — specific, measurable criteria for what successful AI adoption looks like in your operation.
  • Role Redesign Documentation — updated role descriptions and accountability frameworks for affected positions.
  • L4 Readiness Signal — assessment of whether governance gaps need to be addressed before AI deployment proceeds.
L4
Layer Four — Governance & Risk

The layer leadership is accountable for.

Deploying AI without a governance framework is not a technology risk. It is a liability risk. And unlike most risks, it compounds silently until something goes wrong in public.

Regulatory exposure, data protection failures, model output errors that reach clients, bias baked into automated decisions — every one of these is preventable with the right governance structure in place before deployment begins. This layer build produces that structure: specific, owned, and documented. Not a policy that sits in a folder. A framework that actually governs how AI is used in your business.

Fixed price £14k–£22k + VAT · 6–10 weeks

What the work covers

  • AI governance policy — ownership, accountability, escalation — a written policy governing how AI tools are adopted, assessed, used, and monitored in your organisation. Includes a named accountability structure and clear escalation paths for failures and disputes.
  • UK AI Act and GDPR exposure assessment — mapping your planned AI use cases against current and incoming regulatory obligations. Identifying high-risk use cases that require enhanced oversight and data processing activities requiring review under UK GDPR.
  • Model risk assessment framework — a repeatable process for assessing the risk profile of each AI tool or model before deployment, covering output reliability, data handling, vendor accountability, and update cadence.
  • Audit trail design — defining what needs to be logged, by whom, and for how long, to support both regulatory compliance and internal accountability. Covers AI-generated outputs, human review decisions, and model version changes.
  • Bias, transparency, and explainability framework — establishing what your organisation requires in terms of understanding and being able to explain AI-generated outputs, with specific requirements per use case based on regulatory exposure and client-facing risk.
  • Board-level AI risk reporting structure — defining what AI risk information should be reported to the board, at what frequency, in what format, and with what ownership. Includes a first-edition board AI risk summary completed for your current tool set.

Who needs to be involved

Client-side stakeholders we will need to work with
CEO / MD · General counsel / legal · DPO (if appointed) · Risk / compliance lead · IT / technology director · Board contact (if applicable)

An important distinction

  • We identify regulatory exposure and design governance frameworks. We do not provide legal advice. For legally binding assessments, you will need your legal counsel to review and ratify our findings.
  • The L4 price range is wider than other layers because governance complexity varies significantly by sector and by the number and nature of AI use cases in scope.
What you leave with
  • AI Governance Policy — written, owned, and operational, not a template; specific to your business and your tool set.
  • Regulatory Exposure Map — use-case-by-use-case assessment of UK AI Act and GDPR obligations and risk ratings.
  • Model Risk Assessment Framework — repeatable process for evaluating new AI tools before adoption, operable by your team without us.
  • Board AI Risk Summary — first-edition board reporting document covering your current AI risk exposure and mitigation status.
  • Audit Trail Specification — what to log, how, and for how long, specific to each AI use case in your operation.
  • Leadership & Board Readout — findings presented to your executive team with time for challenge and next-step discussion.
What we are honest about

These programmes fix readiness gaps. Not everything.

There are four things clients sometimes expect from a Layer Build that we want to be clear about before the engagement begins.

01

We do not build the AI implementation

A Layer Build produces the foundations AI needs and the documentation your implementation partner or internal team needs to build correctly. We do not build the AI system itself, write production code, or manage deployment. When the foundations are ready, that work is yours to commission from whoever is best placed to do it.

02

Layer Builds do not guarantee AI success

They guarantee that the specific gaps identified in your Foundation Review are addressed. AI outcomes depend on many factors — model capability, use case selection, implementation quality, and market conditions among them. What Layer Builds guarantee is that the foundations will no longer be the reason it fails.

03

We will tell you if the build reveals additional gaps

The Foundation Review is a structured assessment, not a forensic audit. Layer Builds sometimes surface gaps that the Review could not see without deeper access. When that happens, we tell you directly and discuss the implications for your roadmap. Scope changes require a new conversation — we do not expand engagements without agreement.

04

The work requires your team's genuine involvement

A Layer Build is not something we do to your business while it carries on around us. Decision logic extraction, process mapping, governance accountability, and change management all require meaningful participation from the people who run your operations. We will tell you in the scoping call exactly what that involvement looks like.

The full programme

All four layers. One coordinated engagement.

For businesses where the Foundation Review identifies gaps across all four layers, the Full Foundation Programme delivers a coordinated engagement — one team, one methodology, one price — over five to seven months. Layers build on each other as they complete, so the work is sequenced correctly and each layer informs the next.

The Full Programme is priced at a meaningful reduction on the total of individual layer commissions. It also includes the Foundation Review itself, so the entire journey from initial diagnostic to all four foundations built is covered in a single engagement with one agreed price before anything begins.

Foundation Review · £7,500
L1 — Data Infrastructure · £12k–£16k
L2 — Systems & Integration · £14k–£18k
L3 — Workflow & Process · £16k–£20k
L4 — Governance & Risk · £14k–£22k
Full Foundation Programme — Fixed Price £45k–£65k + VAT · Agreed in full before any work begins
Includes: Foundation Review + all 4 Layer Builds
Duration: 5–7 months (sequenced)
Engagement model: one team, one methodology
Deliverable ownership: yours entirely
Vendor independence: full (no referral fees)

Request a scoping conversation → · Start with the Foundation Review

Total if commissioned separately: £63,500–£83,500. Programme pricing reflects the efficiency of coordinated delivery.
Common questions

What people ask before committing to a Layer Build.

If your question is not here, ask it during the scoping call. We answer everything directly.

Do we have to commission all four Layer Builds?

No. You commission only the layers your Foundation Review identifies as needing attention. If your Review finds that Layers 1 and 3 have significant gaps but Layers 2 and 4 are substantially sound, you commission those two. There is no obligation to engage on layers that do not require remediation.

Is the Foundation Review mandatory before a Layer Build?

Yes. We do not sell Layer Builds to clients who have not completed a Foundation Review. The Review identifies which layers need attention and in what sequence. Without it, we do not know whether a Layer Build is the right next step or whether your specific gaps fall within its scope.

What if the Review finds gaps in only some layers?

Then you commission the Layer Builds for those layers only. The Foundation Review roadmap will tell you which layers need attention. If only Layer 1 and Layer 3 require work, you commission those two. There is no obligation to engage on all four.

Can layers be built in parallel?

Sometimes. Whether this is possible depends on the dependency between the layers in question and the capacity of your team to support parallel workstreams. Layer 1 must be substantially complete before Layer 2 is meaningful, and Layer 2 before Layer 3. Where layers are not directly dependent — for example, if your Layer 4 governance work can run in parallel with Layer 1 data remediation — we will discuss this during scoping.

How much of our team's time will a Layer Build take?

More than the Foundation Review, because the work goes deeper into operations. Expect two to four structured sessions per week with relevant stakeholders during the active phase, plus review and sign-off time on deliverables as they are produced. We agree the specific engagement requirements in the Statement of Work before starting.

What happens if the build uncovers gaps the Review did not find?

We tell you directly. The Foundation Review is a structured assessment — not a forensic deep-dive — so Layer Builds sometimes surface specific gaps that were not visible at that level of access. When that happens, we present the additional findings clearly and discuss the options. We never silently expand scope. Every change requires explicit agreement.

Who implements the remediation work?

The Layer Build produces the plan, the specifications, and the governance frameworks. Whether you act on them with your own team, with an implementation partner, or through our Retained Advisory service is your decision. We do not build AI systems or manage technical implementations.

Why is the Layer 4 price range wider than the others?

Governance complexity varies more by sector and use case than it does for the other layers. A professional services firm with three AI tools faces a meaningfully different regulatory landscape than a financial services business with fifteen use cases spanning regulated activities. The scope is agreed and priced precisely in the Statement of Work. The range reflects the range of situations we encounter — not an open-ended pricing structure.

How is the Full Programme discount calculated?

The Full Programme is priced at a reduction on the total of commissioning each layer separately, reflecting the efficiency of coordinated delivery. The reduction is meaningful — typically £10k–£20k relative to the sum of individual layer fees — because coordinated delivery reduces duplication of stakeholder time, documentation overhead, and context-switching cost for our team. The full price is agreed before any work begins.
Ready to start?

The Foundation Review tells you what to fix. Layer Builds fix it.

If you have already completed a Foundation Review and are ready to discuss the next step, use the Get Started form. If you have not yet done a Review, that is where to begin.

Fixed price agreed before work starts · Senior-led throughout · All deliverables yours to keep · No vendor agenda