What About Trust?

AI has become ever-present in media, both social and traditional, and the coverage tends to fall into a few camps: how to implement it, how much productivity it adds (or doesn’t), never-ending hype cycles, and how to secure it. All useful. But the question your employees, partners and customers are really asking is much simpler: can we trust you with AI?

Cyber security taught us to prove controls; AI asks us to earn trust by building confidence in how we use it, what we disclose, and how people can challenge it. People are at the heart of the conversation.

Assurance vs Trust

Let’s take a step back for a second. Governance for cyber security emerged because people, businesses and organisations needed to show they were doing the ‘right’ thing, making deliberate decisions about security practices and their implementation. It’s essentially an assurance job: show evidence that a control meets a requirement, often “A proves B secures C”. Great for data protection and encryption, for example; less great for socio-technical questions like “Is AI being used fairly?”.

Example: A padlock in your browser (A) proves your banking session is encrypted (B), so you can safely access your money (C).

Stepping back into the present: AI is a revolution in technology, society, the workplace and personal lives. I can’t think of an area where AI won’t be prevalent; this we already know. But the language people use to discuss AI points to a different paradigm: trust.

Trust is becoming, in my view, the lens through which to view AI. Deloitte (link below) recently found that only 50% of women in Australia trust or use generative AI in their workplace. There can be many reasons for this, but a clear driver is exposure to generative AI’s harms, such as deepfakes.

We’ve all seen commentary on the impact of AI on society, from employees asking if AI is going to take their job, to customers questioning the implementation of AI within products (who told Meta everyone wanted their chatbot within WhatsApp?); even business partners want to know how you’re using AI on their data.

In my opinion, this all comes back to trust: how can people trust you with how you use AI?

Assurance v Trust Breakdown

Assurance = proof of controls (auditor-facing). Trust = visible practices people can see and challenge (user-facing).

| Dimension | Assurance (cyber security-style) | Trust (AI governance-style) |
| --- | --- | --- |
| Core goal | Prove controls meet requirements | Earn confidence you act responsibly and visibly |
| Evidence style | Audits, certifications, pen tests | Plain-English disclosures, verifiable practices, human oversight |
| Primary audience | Regulators, auditors, security teams | Employees, customers, partners, the public |
| Time horizon | Periodic (annual/quarterly) | Continuous and adaptive |
| Strength | Clear pass/fail, strong for technical controls | Builds social licence, handles ambiguity and context |
| Limitation | Slow, heavy, often after-the-fact | Can be fuzzy if not backed by simple evidence |
| Example (“A proves B secures C”) | A: TLS padlock → B: encrypted session → C: safe banking | A: “AI-assisted” label → B: content reviewed by a named human role → C: reader can trust and contest the output |

The table highlights key differences. Cyber security assurance is periodic and audit-led, and that will also be true for organisations implementing ISO/IEC 42001 (the AI management system standard). By contrast, trust-led governance can be innovative, proportionate and continuously adaptive without being heavy.

Why SMEs can’t wait for an assurance regime

People want to know how you use AI today. Formal assurance will arrive, but it’s slow and often too heavy for SMEs. SMEs need plain-language trust practices now: simple disclosures, a visible human fallback, and a lightweight review rhythm.

A quick example from my work at Naughty Naughty (an AI-first apparel company): a new partner arrived wary. We showed where we use AI, how we govern its use, and the human checks we run. That transparency reduced fear and built trust.

Because AI is opaque, it’s hard to obtain concrete assurance, but you can build trust. Trust is intangible, yet you can show that you’re aware of the risks, work to mitigate them, and make a committed effort to build trust in your products, services, leadership and relationships. Small steps in building trust can have an outsized effect on how customers, employees and partners view your business.

You can’t guarantee AI outcomes, but you can show how you govern them. Start small and scale.

Three simple things leaders can do this week:

  1. Decide and communicate whether staff can use AI (and where).

    • Why: Removes uncertainty and reduces shadow use.

    • Do: Send a short memo with five rules. Example line: “Customer data must not be pasted into public AI tools.”

    • Result: Fewer surprises and lower data-leak risk.

  2. Map current AI use, even unofficial.

    • Why: You can’t trust what you don’t know.

    • Do: Make a simple list: which tools are in use and what each is used for (see the sketch after this list).

    • Result: Shared source of truth, fewer surprises, and a foundation for future assurance.

  3. Label AI-assisted outputs and name the human role.

    • Why: Transparency builds confidence and invites feedback.

    • Do: Add a footer like “AI-assisted. Reviewed by [Team/Role]. Contact: trust@company.com”.

    • Result: Practical trust signals your users and partners can act on.
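
To make steps 2 and 3 concrete, here is a minimal sketch in Python: a toy AI-use register and a helper that produces the disclosure footer. The tool names, purposes and owners are illustrative assumptions, not a prescribed implementation; a shared spreadsheet serves the same purpose.

```python
# Minimal sketch of steps 2 and 3. All entries below are illustrative
# placeholders, not a recommendation of specific tools.

# Step 2: a simple register — a shared source of truth for AI use.
AI_REGISTER = [
    {"tool": "Public chatbot", "used_for": "Drafting marketing copy", "owner": "Marketing"},
    {"tool": "Code assistant", "used_for": "Code suggestions", "owner": "Engineering"},
]

# Step 3: a consistent label for AI-assisted outputs, naming the human role.
def disclosure_footer(reviewer_role: str, contact: str = "trust@company.com") -> str:
    """Return a plain-English trust label naming the human reviewer."""
    return f"AI-assisted. Reviewed by {reviewer_role}. Contact: {contact}"

if __name__ == "__main__":
    for entry in AI_REGISTER:
        print(f"{entry['tool']}: {entry['used_for']} (owner: {entry['owner']})")
    print(disclosure_footer("Marketing Team"))
```

The point isn’t the tooling: it’s that the register and the label exist somewhere visible, are easy to read, and are kept up to date.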

Governance is coming for AI. Start before you’re told to start. Show your work, keep it human, and update as you learn. Underfold is here for your journey.

Links:
