Editorial

In AI We Trust. Or Not. The Next Frontier of Work.

By Andrew Pope
Don't trust the AI — trust the systems you build around it. Like pilots and doctors, confidence comes from training, checklists and knowing when to stop.

“Trust me, I’m AI.” We’ve all seen how AI can be confidently wrong, how it fills gaps with plausible nonsense and how it never takes responsibility (no matter how politely it apologizes). Sometimes the mistakes are obvious. Sometimes they aren’t. An authoritative tone doesn’t make a response right, which makes AI hard to trust.

This lack of trust slows AI adoption as we refine our prompts, double-check sources or find out the hard way when an incorrect output ends up with a client. It’s also frustrating when we berate an inanimate object for getting it wrong (again).

Where Does Trust Begin?

Trust is incredibly powerful. It gives us confidence under uncertainty. As Google’s Project Aristotle found, teams with high levels of trust perform better.

With humans, trust is usually earned the old-fashioned way: time, consistency and watching a person’s response when things go wrong. Whether it’s someone being reliable, being honest, owning a mistake or simply doing what’s expected, trust is the glue behind most of our professional and personal relationships.

AI, meanwhile, skips the awkward small talk and arrives on day one sounding confident, long before it’s earned that credibility.

Every day we’re put into situations where we need to trust people we don’t know: the new doctor at our local practice we trust with our health, the airline pilot we trust with our safety, the first-year teacher we trust with our children’s education. We don’t have time to get to know the person — the pilot isn’t going to walk down the aisle of the aircraft and chat with every passenger — yet we trust them with the most important things.

Trust, in these circumstances, comes not from the people, but from the systems we’ve built around them. The training, the checklists, the co-pilot, the protocols for when things go wrong. Robust systems — consistently deployed — give us confidence in the outcome. We trust these people conditionally based on the governance around them.

We apply the same logic to colleagues at work. Sure, we trust people more once we’ve come to know them over time and they’ve proven themselves reliable and capable. But what about new hires? To instantly distrust and avoid them would be completely pointless, not to mention rather hostile. Much like with the airline pilot, trust stems from the systems around hiring and onboarding, as well as governance and accountability. The better the systems, the more reliable the outcomes.

Conversely, poor recruitment and onboarding, combined with a lack of accountability, will mean either spending more time building trust or having the wrong person on the team entirely.

Building AI Trust Systems

Let’s extend the logic to AI: don't trust the AI, trust the system we build around it.

Without appropriate governance systems, we risk either blindly trusting AI, treating it like a knowledgeable colleague we know well, or dismissing it entirely. Both are mistakes. It’s essential to treat AI like we treat the professionals we don’t know.

Many of us place too much faith in the technology itself, encouraging employees to try it out by themselves or sharing ‘prompts of the week’ that have delivered limited success. This approach is as flawed as trusting the crowd to pick a sensible name for the new intranet: without guardrails, we’re just going to end up with 203 entries for “Intranetty McIntranet Face.” From cognitive impacts such as “AI Brain Fry” to accepting hallucinations without critical analysis, simply leaving people alone with the tools is fraught with risk.

Systems that reduce the risk of AI errors and protect our wellbeing ultimately result in us placing more trust in the outputs. Such systems don’t have to be comprehensive — and indeed they shouldn’t be. If every prompt requires a 45-page legal disclaimer and risk assessment, we won’t go anywhere near AI. What they should do is help our people build confidence in how they use the tools and, crucially, know when to stop.

Such systems that help us trust AI include:

  • Training people to build and prompt agents well. Provide the skills to use the technology as well as awareness of where things can go wrong.
  • Using sources we can verify. This could include having certain libraries or individual content items formally verified by the organization.
  • Building in human intervention points, such as before the AI takes an irreversible action or where confidence thresholds are not met. Scoring systems can cover a range of scenarios so that employees know when and how to intervene: for example, ‘Go’ for low-risk scenarios (such as meeting notes) allowing normal judgment, ‘Slow’ for moderate risk or unfamiliar context requiring review, and ‘Stop’ for high risk or low confidence where a formal review is required. (A minimal sketch of such a triage, combined with the checklist below, follows this list.)
  • Providing checklists for when to stop and escalate, such as asking people to check whether the output is potentially dangerous, uses personal data, is irreversible or distressing to others, or whether they are uncertain the output is correct.
  • Knowing who to go to when things don't seem right. Whether in combination with a scoring system or as part of an escalation process, having experts that employees can turn to will be essential as AI becomes more ubiquitous.
  • AI applying a risk classification to its own response. This will typically require the employee to instruct the model each time, prompting it to apply a risk classification, such as ‘high,’ to every response. While not foolproof, it is a quick way to get AI to assess itself. (A sketch of such a prompt follows this list.)
  • Critical thinking training. Probably the most important aspect is showing our employees how to critically evaluate AI responses themselves, giving them confidence in their own abilities — knowing whether the response is trustworthy or not.
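
To make the ‘Go/Slow/Stop’ idea concrete, here is a minimal sketch in Python that combines the scoring system with the stop-and-escalate checklist above. Everything in it is an assumption to adapt to your own governance framework, not a standard: the `OutputCheck` fields, the labels and the rules are all illustrative.

```python
# A minimal sketch, assuming a simple checklist and three risk levels.
# All names, labels and rules are illustrative, not a standard.

from dataclasses import dataclass


@dataclass
class OutputCheck:
    """Answers to the stop-and-escalate checklist for one AI output."""
    risk_level: str                      # "low", "moderate" or "high"
    familiar_context: bool               # does the user know this subject well?
    potentially_dangerous: bool = False
    uses_personal_data: bool = False
    irreversible: bool = False
    distressing: bool = False
    uncertain_correct: bool = False


def triage(check: OutputCheck) -> str:
    """Map a checklist to 'Go', 'Slow' or 'Stop'."""
    # Any checklist red flag, or high risk, means stop and escalate
    # for a formal review.
    red_flags = (
        check.potentially_dangerous
        or check.uses_personal_data
        or check.irreversible
        or check.distressing
        or check.uncertain_correct
    )
    if check.risk_level == "high" or red_flags:
        return "Stop"
    # Moderate risk or unfamiliar context: a human reviews before use.
    if check.risk_level == "moderate" or not check.familiar_context:
        return "Slow"
    # Low risk, familiar territory (e.g. meeting notes): normal judgment.
    return "Go"


if __name__ == "__main__":
    print(triage(OutputCheck("low", familiar_context=True)))       # Go
    print(triage(OutputCheck("moderate", familiar_context=True)))  # Slow
    print(triage(OutputCheck("low", True, irreversible=True)))     # Stop
```

The value isn’t the code itself; it’s that the rules are explicit, so anyone can see why an output was waved through or escalated.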
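The self-classification idea can be sketched the same way: a reusable instruction that asks the model to label its own response, and a parser that treats a missing label as a reason to slow down. The instruction wording and the `with_risk_label` and `parse_risk` helpers are assumptions for illustration, not any vendor’s API; substitute your own chat call where noted.

```python
# A minimal sketch of asking the model to label its own risk.
# The instruction wording and helper names are illustrative assumptions.

RISK_INSTRUCTION = (
    "After your answer, add one final line of the form "
    "'Risk: low', 'Risk: moderate' or 'Risk: high', classifying how "
    "risky it would be to act on your answer without human review."
)


def with_risk_label(user_prompt: str) -> str:
    """Prepend the self-classification instruction to a user prompt."""
    return f"{RISK_INSTRUCTION}\n\n{user_prompt}"


def parse_risk(response: str) -> str:
    """Pull the model's self-assigned label from its last line.

    Not foolproof: models don't always follow instructions, so a
    missing label is itself treated as a reason to slow down.
    """
    lines = response.strip().splitlines()
    if lines and lines[-1].lower().startswith("risk:"):
        return lines[-1].lower().removeprefix("risk:").strip()
    return "unlabelled"


if __name__ == "__main__":
    # Sending the prompt to a model is left out: substitute your own
    # chat API call. Here a canned response stands in for the reply.
    prompt = with_risk_label("Summarize today's team meeting notes.")
    fake_response = "Here is the summary...\nRisk: low"
    print(parse_risk(fake_response))  # -> "low"
```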

AI Is Never the Expert

Don’t be fooled by the authoritative responses, the apologies when you point out mistakes, the language used. Focus on how we can manage the unknowns, how we can mitigate AI’s ability to fill in the gaps with hallucinations when it doesn’t know the answer.

It’s the systems we build. The backups, the knowledge, the people who step in to help. That foundation matters more than the tool itself. After all, an airline pilot without training is just every toddler running around with their arms outstretched yelling “zoooooom!” Enthusiastic, but not an expert. Just like our friendly AI.

Editor's Note: What else can make or break the outcomes of your AI initiative?

About the Author
Andrew Pope
Andrew looks at workplace technology through the eyes of the workforce, as owner of Designing Collaboration. He helps his clients become clearer and more confident in choosing how and why to use digital workplace tools, overcome a lack of alignment in digital and working practices, break poor habits such as over-reliance on email and terrible meetings, and improve digital health and culture, such as "always on."

He coaches practical technical and soft skills to lead and empower teams in digital workplaces and develops strategies to leverage collaboration technology to meet organizational, team and individual needs — whether specific goals, increased productivity or improved wellbeing.

Main image: Amber Faust | Unsplash