How AI will trip you up, if you let it

AI is having a moment.

In business and at work, it can be an incredible sidekick. The kind that drafts faster than you can blink, summarizes meetings you barely remember attending, and turns “ugh, not this again” tasks into “done before lunch.”

But there’s a catch.

AI is also that cocky colleague who will walk into the wrong meeting room, shake hands with the wrong client, and still insist it nailed the brief.

And if you don’t build a little caution into how you use it, it can blow up in your face.


A cautionary tale

Even the experts aren’t immune. In 2025, Deloitte agreed to repay part of a fee to the Australian government after errors (including fabricated references) were found in a report where generative AI was used in drafting.

That’s not a “don’t use AI” story. That’s a “use AI carefully” story.

In this article we’ll cover the four main ways AI will trip you up if you let it — and what to do instead.  

First, a quick mindset shift: AI is a draft engine, not a truth engine

Generative AI is optimized for plausible language, not verified reality. It’s great at sounding right, and sometimes that’s exactly the problem.

So if you take one thing from this post, make it this:

AI can accelerate your work, but it can’t replace your judgment

When the stakes are low (brainstorming, first drafts, internal notes), you can move fast.

When the stakes are high (client-facing, regulated, contractual, brand-sensitive), you need a process that assumes AI might be wrong and catches it before it escapes into the wild.

The 4 things to be cautious of when using AI (and how to stay out of trouble)

1) Hallucinations (a.k.a. confidently wrong answers)

AI can generate outputs that feel credible but are factually incorrect, internally inconsistent, or based on fabricated sources. Sometimes it fills gaps. Sometimes it guesses. Sometimes it invents.

This is not a rare anomaly; it’s a known limitation of how these models work.

And it can get expensive, fast. The Deloitte incident is a high-profile example: the report in question included incorrect references and citations, and the situation led to a partial repayment and substantial reputational impact.

Implication: Treat AI output as a first draft, not a source of truth.

Practical guardrails that actually work:

  • Verification rule: If it will be seen by a client, regulator, media, or executive… verify it.

  • Source discipline: Ask AI to include sources, but assume sources may be wrong and check them anyway.

  • Spot-checking: Validate the most important claims (numbers, definitions, legal/regulatory statements, quotations, attributions).

  • Make it show its work: “List assumptions,” “explain reasoning,” “identify uncertainties,” “what would change your answer?”
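
If your team calls a model through an API, those “show its work” prompts can be baked into a reusable wrapper so nobody has to remember them. A minimal Python sketch; `send_to_model` is a hypothetical stand-in for whatever client your approved tool actually provides:

```python
# Sketch: wrap any task in a "show your work" scaffold before it goes
# to a model. send_to_model() is a hypothetical placeholder for your
# approved tool's API client; wire in the real call yourself.

SHOW_YOUR_WORK = """{task}

Before you answer:
1. List the assumptions you are making.
2. Explain your reasoning step by step.
3. Flag any claims you are uncertain about.
4. State what new information would change your answer.
5. Cite sources for factual claims (they will be verified)."""

def send_to_model(prompt: str) -> str:
    raise NotImplementedError("replace with your approved AI tool's client")

def draft_with_guardrails(task: str) -> str:
    return send_to_model(SHOW_YOUR_WORK.format(task=task))
```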

2) Bias in outputs (quietly steering you off course)

AI reflects patterns in its training data, including the biases baked into that data (e.g. stereotypes and cultural assumptions).

Bias doesn’t always show up as an obviously problematic statement. More often, it shows up as:

  • the examples it chooses

  • the defaults it assumes

  • the perspectives it privileges

When AI is prompted to generate an image of a female medical professional, the result is often ambiguous: it could be read as either a doctor or a nurse.

When AI is prompted to produce an image of a male medical professional, the result is invariably a doctor, not a nurse.

In other words: AI can subtly reinforce cultural bias and stereotypes while sounding neutral.

Implication: AI can amplify existing assumptions unless humans actively interrogate the output.

Practical guardrails:

  • Perspective prompting: “What’s the counterargument?” “Who might disagree?” “What would a frontline team say?”

  • Diversity checks: “How might this land across different regions/roles/cultures?”

  • Decision hygiene: Don’t accept the first answer. Ask for 2–3 alternative approaches and compare.

A simple habit: when you see an answer that seems to make assumptions or generalizations, ask, “Who would agree, or perhaps disagree, with this perspective?”
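
That habit is easy to script, too. A small sketch that pairs any draft answer with a standard round of challenge prompts; the wording of the prompts is illustrative, not canonical:

```python
# Sketch: standard "interrogate the output" follow-ups, so the first
# answer is never accepted as the last word. Prompt wording is
# illustrative; adapt it to your context.

CHALLENGE_PROMPTS = [
    "What is the strongest counterargument to this?",
    "Who might disagree with this, and why?",
    "What would a frontline team say about this?",
    "How might this land across different regions, roles, or cultures?",
    "Give two alternative approaches and compare the trade-offs.",
]

def challenge_round(draft_answer: str) -> list[str]:
    """Return the follow-up prompts a reviewer should run on a draft."""
    return [f"Here is a draft answer:\n{draft_answer}\n\n{q}"
            for q in CHALLENGE_PROMPTS]
```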

3) Data privacy and confidentiality (the silent risk)

This is the one that bites teams who are otherwise doing “everything right.”

Many free or consumer AI tools may retain inputs and, depending on the service, may use them to improve models. Enterprise platforms typically provide stronger controls and contractual assurances around data handling (data isolation, retention policies, compliance commitments).

Implication: Never input confidential, client, or regulated data into unsecured tools.

Practical guardrails:

  • Tool tiering:

    • Public AI tools → only public, non-sensitive content

    • Approved enterprise AI → real work, under policy and governance

  • Redaction habit: If you must paste in real text, strip names, identifiers, amounts, addresses, and unique context.

  • “Would I forward this?” test: If you wouldn’t email it externally, don’t paste it into an unapproved AI tool.
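
The redaction habit can be semi-automated as a first pass. A minimal sketch using regular expressions; mechanical redaction will miss things (personal names especially), so a human still reviews before anything is pasted into an external tool:

```python
import re

# Sketch: a first-pass redactor for text headed to an external AI tool.
# These patterns catch obvious identifiers; they will NOT catch
# everything, so treat this as a helper, not a guarantee.

PATTERNS = {
    "EMAIL":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE":  re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "AMOUNT": re.compile(r"[$€£]\s?\d[\d,]*(?:\.\d+)?"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Invoice $12,400.50 for jane.doe@example.com, call +61 2 5550 1234"))
# -> Invoice [AMOUNT] for [EMAIL], call [PHONE]
```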

This isn’t about paranoia. It’s about professionalism, confidentiality, and compliance.

4) Accountability stays with the human (always)

AI doesn’t take responsibility. It doesn’t sign the contract. It doesn’t get called into the compliance review. It doesn’t show up when the client asks, “Where did this come from?”

You do.

Even in the Deloitte case, the core issue wasn’t that “AI was at fault.” The issue was that AI’s errors made it into something official, client-facing, and paid for, where the standard is (rightly) higher.

Implication: AI should be treated as an assistant, not an authority.

Practical guardrails:

  • Named owner: Every AI-assisted output has a human owner responsible for accuracy.

  • Review gates: Add a lightweight checklist before anything goes external.

  • Audit trail: Save prompts, drafts, and references for high-stakes work.
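
The audit trail can be as simple as one log line per AI-assisted draft. A minimal sketch; the file name and fields are illustrative:

```python
import datetime
import json
import pathlib

# Sketch: a minimal audit trail for high-stakes AI-assisted work.
# One JSON line per draft: who owned it, what was asked, what came back.

LOG = pathlib.Path("ai_audit_log.jsonl")

def record(owner: str, prompt: str, output: str, references: list[str]) -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "owner": owner,
        "prompt": prompt,
        "output": output,
        "references": references,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
```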

A useful mantra: AI can help you write it. Only you can stand behind it.


A simple “don’t get burned” workflow (use this tomorrow)

Here’s a lightweight process your team can adopt without creating bureaucracy:

  1. Define the stakes

    Internal draft? Low risk.

    Client-facing / regulated / brand-sensitive? High risk.

  2. Constrain the input

    Use approved tools.

    Remove confidential identifiers.

    Provide context, definitions, and success criteria.

  3. Generate the draft

    Ask for assumptions and uncertainties.

    Ask for alternatives (not just “the answer”).

    Ask for credible sources.

  4. Verify what matters

    Facts, numbers, references, quotes, and claims.

    Anything that could cause reputational, legal, or financial harm.

  5. Human sign-off

    One accountable person approves the final.

That’s it. No drama. No committees. Just a professional standard.
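
And if you want the gate to be more than a poster on the wall, it can even be encoded. A toy sketch of steps 1, 2, 4, and 5 as a pre-release check; the field names and rules are illustrative, not prescriptive:

```python
# Sketch: the workflow's review gate as a lightweight pre-release check.
from dataclasses import dataclass

@dataclass
class Draft:
    client_facing: bool      # step 1: define the stakes
    approved_tool: bool      # step 2: constrain the input
    sources_checked: bool    # step 4: verify what matters
    owner: str = ""          # step 5: human sign-off

def ready_to_ship(d: Draft) -> bool:
    if not d.client_facing:  # low stakes: move fast
        return True
    return d.approved_tool and d.sources_checked and bool(d.owner)

print(ready_to_ship(Draft(client_facing=True, approved_tool=True,
                          sources_checked=True, owner="A. Reviewer")))  # True
```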

In Short…

AI can dramatically accelerate thinking and execution — but only when paired with human judgment, verification, and responsibility.

Use it like a power tool:

  • incredibly useful

  • slightly dangerous

  • best handled with training, guardrails, and respect for what it can’t do

If you let AI run unattended, it will eventually trip you up. If you use it with care, it becomes a genuine advantage.

Want help putting these guardrails into practice?

If you want to roll this out across your team (without turning it into a compliance headache), that’s exactly what I help businesses do: make AI practical, safe, and measurably useful.

Book a consult, and we’ll map:

  • where AI can save your team time immediately

  • what “approved use” should look like in your context

  • a simple adoption plan your people will actually follow

Because the goal isn’t “to use AI.” The goal is to get results without preventable surprises.
