Know exactly where you stand before you spend a pound more.
The Foundation Review is a structured four-week diagnostic across all four AI readiness layers. It produces specific findings, not generic recommendations — and a sequenced roadmap you can act on the day after the readout.
Most AI programmes fail before the first tool is deployed.
The failure happens in the gap between AI ambition and operational reality. The tools are capable. The use cases are legitimate. But the data, systems, processes, and governance structures the tools depend on are not in the state the vendor assumed when they made the sale.
The Foundation Review exists because that gap is both diagnosable and fixable — but only if you know specifically where it is, how wide it is, and in what sequence to address it. Generic advice does not produce that picture. A structured diagnostic does.
All four foundation layers, in a single engagement.
The four foundation layers represent the operational domains that AI performance depends on. We assess all four simultaneously — because understanding the dependencies between them is essential to sequencing the roadmap correctly.
Data
The quality, structure, accessibility, and governance of the data your AI tools will consume. The floor everything else rests on.
- Data quality and completeness
- Structural consistency across systems
- Data ownership and provenance
- Unstructured data volume and handling
Systems
The connectivity between platforms, the reliability of integrations, and the accessibility of your operational data to AI tools in real time.
- Integration architecture and documentation
- API availability and stability
- Data latency and synchronisation
- Integration monitoring and ownership
Process
The degree to which your operational processes are documented, consistent, and explicit enough for AI to follow, augment, or automate reliably.
- Process documentation completeness
- Decision logic explicitness
- Cross-team process consistency
- Exception and escalation handling
Governance
The policies, accountability structures, and oversight mechanisms that allow AI deployment to scale without accumulating regulatory or reputational exposure.
- AI use policy and enforcement
- Output accountability and audit trails
- Regulatory obligation mapping
- Vendor assessment process
What happens, week by week.
The engagement runs for four weeks from the point of scoping sign-off. Every stage has a defined output. Nothing is left open-ended.
Scoping & kickoff
We agree the precise scope, confirm access requirements, and brief the stakeholders we will need to speak with. You receive a project plan.
- Scoping call and SOW sign-off
- Stakeholder map agreed
- Document and access requests sent
- Project timeline confirmed in writing
Structured interviews
We work through structured interviews with the people closest to operations in each layer — not just leadership. This is where the real picture surfaces.
- Layer-by-layer interview programme
- Technical and operational stakeholders
- Documentation review in parallel
- Gap hypotheses developed
Analysis & scoring
We score each layer against defined readiness criteria and map the dependencies between findings. Specific gaps are identified, evidenced, and prioritised.
- Layer-by-layer readiness scoring
- Dependency mapping across layers
- Root cause identification per gap
- Roadmap sequencing drafted
Report & readout
You receive the written report. We then deliver a senior readout session — structured for your leadership team, with time for questions and next-step discussion.
- Written report delivered
- Senior leadership readout session
- Q&A and next-step discussion
- All materials transferred to you
Three deliverables. All included. All yours.
Every deliverable is produced by the consultant leading the engagement, not synthesised from a template. The report is specific to your business. The roadmap is sequenced for your situation. The readout is structured for your audience.
Foundation Assessment Report
Written · PDF
A written document covering findings for each of the four foundation layers, specific to your operational state. Not a benchmark comparison — a precise assessment of what exists, what is missing, and what that means for your AI objectives.
- Layer-by-layer readiness scores with supporting evidence
- Specific findings per layer — named gaps with root cause analysis
- Dependency map showing how gaps in lower layers affect higher ones
- Executive summary suitable for board or investor presentation
Sequenced Action Roadmap
Written · PDF
A prioritised plan for addressing the gaps identified in the assessment, sequenced in the correct layer order. Each item includes the rationale for its position in the sequence — not just what to do, but why it must be done in that order.
- Prioritised gap list with sequencing rationale per item
- Indicative effort and complexity for each remediation
- Dependency flags — items blocked by upstream gaps
- Decision points requiring board or leadership input
Senior Leadership Readout
Live session · 90 min
A structured session with your leadership team presenting the findings and roadmap in full. We present the report — we do not simply email it. The readout includes time for questions, challenge, and discussion of next steps.
- Structured walkthrough of findings by layer
- Facilitated Q&A with leadership team
- Discussion of roadmap priorities and sequencing decisions
- Presented by the consultant who conducted the assessment
What a finding actually looks like.
Finding: Customer records are held across three systems with no defined master — CRM, billing platform, and support desk — with no documented reconciliation process. Field definitions for 'customer status' differ across all three. AI tools consuming any of these sources will produce inconsistent outputs that vary by access point.
Finding: The customer escalation process is documented at flowchart level but the decision criteria for escalation are not written down — they are held by two senior team members. Any AI augmentation of this process will require those criteria to be made explicit before automation is reliable.
Finding: Three AI tools are currently in operational use with no formal adoption process having been followed for any of them. No vendor data handling review has been conducted. In the context of the FCA's current guidance on AI in financial services, this represents an active compliance gap.
What the Foundation Review does not include.
- Recommendations for specific AI vendors, platforms, or tools
- Implementation planning, project management, or build work
- Benchmarking against peer organisations or industry averages
- Legal, regulatory, or compliance advice — we identify gaps, not provide counsel
- Commitments about what will be delivered by any subsequent engagement
The Foundation Review works best in specific circumstances.
Clients who get the most from a Foundation Review
Where we will tell you the Review is probably not the right step yet
What people usually ask before committing.
If your question is not here, ask it during the scoping call. We answer everything directly.
Start with a scoping call. No commitment required.
Tell us about your situation using the Get Started form. We will review it and come back to you within one business day to arrange a 30-minute scoping conversation.