Compliance
9 Apr 2026 · 5 min read

If AI Makes a Mistake, Who's Responsible?

Your AI just sent the wrong email to a client. Changed a deal stage nobody asked it to. Now someone's asking: whose fault is that? The answer is simple. Proving what happened isn't.

By OneConnecter Team

Your AI just sent the wrong email to a client. The tone was off, the numbers were wrong, and the client is now asking why they received a message nobody in your team wrote.

Someone's asking: whose fault is that?

The answer is simple: you.

Not the AI. AI doesn't have fault. It doesn't have intentions, judgement, or accountability. It executed a task, and the result landed in someone's inbox.

Every major framework (the EU AI Act, the GDPR, UK regulatory guidance) says the same thing: the business that deploys AI is responsible for what it does. Not the model provider. Not the platform. You.

That part isn't complicated. Everyone agrees on it.

Here's what's complicated:

You're responsible. But you can't prove what happened.

The AI sent the email. You're accountable for it. But right now, can you answer these questions?

What did the AI actually do? Not "it sent an email." The full chain — what it read, what it decided, how it drafted the message, what data it used, what it changed along the way.

You don't know. The steps between "user asked a question" and "email arrived in someone's inbox" are invisible.

Why did it do it? Was it responding to a direct instruction? Was it triggered by an automation? Did it chain from a CRM update to a workflow to an email sequence — three hops, zero visibility?

You don't know. There's no record of the reasoning.

Did anyone approve it? Did a human see the email before it was sent? Did anyone review the content? Was there a checkpoint between "AI decided to send" and "email went out"?

No. Because there was no checkpoint. The AI had the keys and nobody was watching the door.

Can you show the full trail? If the data in that email came from your CRM, can you trace the journey? HubSpot record → AI reads it → AI drafts email → email sends. Can you show that chain to a regulator, a client, or your own management?

You can't. Because nobody logged the journey. The departure and the arrival exist in separate systems. The flight in between is a black hole.

Responsibility without visibility is exposure

This is the gap nobody talks about.

Every AI liability article on the internet tells you who's responsible. Developer, deployer, user — shared responsibility, framework this, regulation that. That's not useful. You already know you're responsible.

The real problem is: you're responsible for something you can't see.

You're accountable for emails you didn't review. For record changes you didn't notice. For data movements you can't trace. The responsibility is clear. The evidence doesn't exist.

That's not a legal problem. That's an operational one.

And when something goes wrong — and it will — the conversation isn't "whose fault is this?" The conversation is:

"Show me what happened."

And you can't.

What happens when you can't show what happened

The client loses trust

"Your AI sent me something wrong. What happened?"

The honest answer: we don't know exactly. That's not an answer a client accepts. It doesn't matter that the AI made the mistake — what matters is that your business can't explain it.

Your team can't fix it

A CRM record changed. A deal stage moved. A contact got re-categorised. Nobody noticed for three weeks. Now a sales report is wrong and nobody can trace it back to the moment the AI made the change — because there's no trace.

You can't fix what you can't see. You can't prevent what you can't reconstruct.

A regulator asks the question you can't answer

When they ask — and they will — the question isn't "who's responsible?" They already know it's you. The question is:

"What controls did you have in place?"

And the answer right now is: none that you can prove.

Flip the question

Stop asking "who's responsible?" You know the answer.

Start asking: "Can I prove what my AI did?"

Because that's the question that actually matters. Not liability theory. Not framework breakdowns. Not "shared responsibility." Just:

  • Can you show what the AI did, step by step?
  • Can you show whether a human approved it?
  • Can you show where the data came from and where it went?

If yes — a mistake is a documented incident. Fixable. Explainable. Defensible.

If no — a mistake is exposure. And the next question is one you don't want to answer.

How to close the gap

Approve before it happens

Every write action — sending emails, updating records, posting messages, moving data — should require your explicit YES before the AI executes. The AI prepares the action. Shows you exactly what it's about to do. You decide.

No more autonomous emails. No more "I thought you wanted me to send it."
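As a minimal sketch of what an approval gate could look like in code (the class and field names here are illustrative, not OneConnecter's actual API): the AI prepares the action, the action stays pending, and nothing executes without a named reviewer's yes.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative approval gate: a write action is prepared by the AI,
# but execute() refuses to run until a human has explicitly approved.
@dataclass
class PendingAction:
    tool: str                 # e.g. "gmail-send-email"
    payload: dict             # exactly what the AI is about to do
    status: str = "pending"
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None

    def approve(self, reviewer: str) -> None:
        # The explicit YES: recorded with who and when.
        self.status = "approved"
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)

    def execute(self) -> str:
        if self.status != "approved":
            raise PermissionError("No explicit YES: action blocked")
        return f"{self.tool} executed, approved by {self.approved_by}"

action = PendingAction("gmail-send-email", {"to": "client@company.com"})
# Calling action.execute() here would raise PermissionError.
action.approve("Sarah")
print(action.execute())
```

The design point is that the gate sits between the decision and the side effect: the blocked path is the default, and approval is an auditable event, not a setting.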

See the full chain

Not just the result — the journey. What was asked → what the AI decided → which tools it used → what data it touched → whether a human approved → what happened.

Every step. Every time. Automatically.
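The chain above can be sketched as an append-only log keyed to one request. This is a hypothetical structure, assuming each step records its kind, its detail, and a timestamp; it is not OneConnecter's internal schema.

```python
import uuid
from datetime import datetime, timezone

# Illustrative chain log: every step between "user asked" and "result
# landed" is appended to one record, automatically, in order.
class Chain:
    def __init__(self, request: str):
        self.chain_id = uuid.uuid4().hex
        self.steps: list = []
        self.log("read", f"User asked: {request!r}")

    def log(self, kind: str, detail: str) -> None:
        self.steps.append({
            "step": len(self.steps) + 1,
            "kind": kind,          # e.g. read / decision / destination
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        })

chain = Chain("email the client the updated proposal")
chain.log("decision", "AI drafted email to client@company.com")
chain.log("destination", "gmail-send-email -> pending approval")
for s in chain.steps:
    print(s["step"], s["kind"], s["detail"])
```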

Trace across platforms

When data moves from HubSpot to Google Sheets to Slack, the audit trail should link every hop. Not three separate logs in three separate systems. One chain, from source to destination, with every hop linked.
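One way to picture the linking: a single chain ID travels with the data across every platform, so "show me what happened" is a filter on one ID rather than a hunt through three systems. A hypothetical sketch (the chain ID reuses the example above; the helper is invented for illustration):

```python
# Illustrative cross-platform trail: each hop is stamped with the same
# chain ID, so HubSpot -> Google Sheets -> Slack is one linked journey.
def record_hop(trail: list, chain_id: str, platform: str, action: str) -> None:
    trail.append({"chain": chain_id, "platform": platform, "action": action})

trail: list = []
record_hop(trail, "7f4a91bc", "hubspot", "read deal record")
record_hop(trail, "7f4a91bc", "google-sheets", "append row")
record_hop(trail, "7f4a91bc", "slack", "post summary")

# Reconstructing the journey is one query, not three investigations.
journey = [h for h in trail if h["chain"] == "7f4a91bc"]
assert [h["platform"] for h in journey] == ["hubspot", "google-sheets", "slack"]
```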

Keep the receipts

Every action, every approval, every chain — recorded and searchable. When someone says "show me what happened" — one click, full answer. Not next week. Not after an investigation. Now.

What this looks like with OneConnecter

Chain: 7f4a91bc...
  Step 1 [read]        User asked: "email the client the updated proposal"
  Step 2 [read]        AI drafted email to client@company.com
  Step 3 [destination] gmail-send-email → pending approval
  Step 4 [destination] gmail-send-email → approved by Sarah → sent

If that email turns out to be wrong — you can show exactly what happened, who approved it, and what data the AI used.

And when someone asks "who's responsible?" — you have the only answer that matters:

"We are. And here's everything we did to maintain oversight."

You're responsible. Now prove it.

Start free with OneConnecter — approval gates, full audit trail, cross-platform chain of custody. Nothing happens without your YES.


OneConnecter is an AI orchestration platform for SMEs. One login, every tool, full audit trail. Nothing happens without your YES.

See OneConnecter in action

AI governance, full data provenance, EU AI Act compliance — one platform, no code required.