Last night, our founder and CEO, Calvin Cheng, joined Preface’s “AI in Governance” series to explore a question facing every boardroom today: how do we move from AI ambition to genuine AI readiness?

The session, hosted by Tommy, Founder and CEO of Preface, and expertly moderated by Sean, brought together over 30 executives from more than 10 industries. Most participants were already involved in AI-related proofs of concept (POCs), which grounded the discussion in real-world implementation rather than theory.
Below are highlights from the conversation and their implications for boards, governance professionals, and enterprises experimenting with AI.
1. The 80/20 Gap: “Doing AI” Without the Seatbelt
One theme resonated strongly: most organisations are now “doing AI”, but far fewer have put in place the equivalent of a seatbelt.
Across markets, a large majority of companies reference AI in their strategy, annual reports, or ESG disclosures. Yet only a minority have structured AI governance in place – with clear oversight roles, policies, and lifecycle controls that cover how AI is selected, hosted, and applied in practice.
Calvin framed this as the difference between AI ambition and AI readiness:
- AI ambition is about pilots, vendor demos, and innovation narratives.
- AI readiness is about knowing where AI is used today, who is accountable for each use case, and how failures are detected, escalated, and remediated.
For boards and INEDs, the key risk is not only misuse of AI, but also the blind spot of not knowing where AI is quietly shaping decisions in reporting, risk, and customer journeys.
2. AI Governance Is Bigger Than the Model
Another key takeaway was that AI governance is not a “model problem” – it is an organisational problem.
The discussion highlighted several dimensions that sit beyond algorithms themselves:
- Strategy and risk appetite
Boards need clarity on why the organisation is adopting AI, where it expects AI to create value, and how much risk it is prepared to take in high‑impact use cases.
- Data and infrastructure foundations
Poor data quality, weak lineage, and fragmented infrastructure can hard‑wire bias and fragility into AI systems long before they reach production.
- Lifecycle governance
Risks manifest differently when AI is:
- Selected (choice of models, data, objectives)
- Hosted (security, access control, monitoring, vendor risk)
- Applied (embedding outputs into workflows, approvals, and customer decisions)
- People, roles, and culture
Effective AI governance requires cross‑functional collaboration between technology, risk, legal, compliance, and business teams, supported by clear ownership and training at all levels.
Tommy shared observations from corporate transformation projects: shadow AI use, policies that exist only on paper, and governance teams being brought in late – often at deployment rather than design. These patterns underline why AI governance must be treated as part of overall transformation, not as a standalone technology exercise.
3. What “Good” Looks Like for Boards
The panel also explored what “good” AI governance looks like in practical board terms. Rather than focusing on abstract principles alone, the conversation emphasised concrete structures and artefacts:
- Board oversight and structure
AI should appear explicitly in board and committee agendas, with clear delegation to risk, technology, or ethics committees and defined escalation paths for high‑risk use cases.
- Formal AI policy and principles
A codified AI policy, aligned with recognised principles such as fairness, reliability, privacy, security, inclusiveness, transparency, and accountability, should guide both internal development and third‑party tools.
- Documentation and inventories
Organisations should maintain a central inventory of AI systems and supporting documentation, including model cards, data documentation, design history, and risk assessments.
- Continuous review
AI policies and controls need to be treated as living frameworks, revisited periodically as regulations, technologies, and use cases evolve.
This is where governance professionals can play a pivotal role: translating high‑level board expectations into practical policies, workflows, and artefacts that can be tested, audited, and refined over time.
4. Three Uncomfortable Questions Every Board Should Ask
Perhaps the most memorable part of the evening was the focus on “uncomfortable questions” – the questions that boards should be asking management if they wish to avoid becoming a future AI governance cautionary tale.
Three questions stood out:
- “Can you show us a complete and current inventory of where AI is used in our critical decisions – and who owns each use case?”
If this cannot be answered today, there is a visibility and accountability gap that needs to be addressed urgently.
- “If an AI system fails publicly tomorrow – through bias, hallucination, or a data incident – what exactly happens in the first 24–72 hours?”
This probes the existence and realism of AI-specific incident playbooks, including logging, escalation, communication, and remediation.
- “How do we know that AI is improving our risk and compliance posture rather than quietly undermining it?”
This calls for evidence: metrics on model performance, error rates, escalations, audit findings, and whether AI has been integrated into risk registers, internal controls, and assurance plans.
These questions are intentionally uncomfortable because they reveal whether AI governance is embedded or only aspirational.

5. How Wizpresso Fits In
At Wizpresso, we see a clear gap between high‑level AI principles and day‑to‑day governance. Our AI compliance platform is built to bridge exactly this gap by transforming international AI standards into practical, auditable workflows for boards and compliance teams.
Instead of leaving frameworks like ISO/IEC 42001, NIST AI RMF, and emerging AI regulations as static PDFs, our platform converts them into actionable checklists that can be assigned, tracked, and evidenced across the organisation. This allows governance, risk, and compliance teams to:
- Map requirements to concrete controls and owners.
- Break down complex standards into clear tasks and milestones.
- Monitor progress against each requirement in real time.
Critically, we help organisations track compliance across three layers:
- Policy – whether appropriate AI and data policies exist, are approved, and are aligned with recognised standards.
- Procedure – whether there are documented processes and controls that operationalise those policies across the AI lifecycle.
- Evidence – whether there is verifiable documentation, logs, and artefacts (e.g. model cards, risk assessments, incident records) that can be produced to boards, auditors, and regulators when required.
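The three layers above can be sketched as a simple tracking structure: a requirement is only complete when the policy, procedure, and evidence layers are all in place. The requirement identifier and status values here are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative sketch of tracking one requirement across the three layers.
requirement = {
    "id": "ISO42001-6.1",     # hypothetical requirement reference
    "policy": "approved",      # an appropriate policy exists and is approved
    "procedure": "documented", # a process operationalises the policy
    "evidence": None,          # no verifiable artefact collected yet
}

def compliant(req: dict) -> bool:
    """A requirement counts as met only when all three layers are filled in."""
    return all(req.get(layer) for layer in ("policy", "procedure", "evidence"))
```

In this sketch, the requirement above is not yet compliant: the policy and procedure layers exist, but no evidence has been recorded, which mirrors the common pattern of policies that exist only on paper.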
By unifying these elements into a single platform, Wizpresso enables organisations to move from ad hoc AI pilots to governed scaling – giving boards a clear line of sight from international AI standards, to internal policies, to on‑the‑ground evidence that those standards are actually being met.
We are grateful to Tommy and the Preface team for hosting, to Sean for guiding a sharp and candid discussion, and to all the executives who shared their experiences and challenges openly. The level of engagement confirms one thing: AI governance has firmly arrived in the boardroom – and the organisations that treat it as a strategic capability, not just a compliance task, will be the ones that lead.
If your board or leadership team is exploring how to move from AI pilots to governed scaling, we would be delighted to continue the conversation.
Contact Us: https://wizpresso.com/Contact