
We Said We’d Never Trust AI Over Humans. We Might Be Wrong.

  • Writer: Peter Galea
  • Jul 22
  • 2 min read

Updated: Jul 25

We’ve long told ourselves the same thing: AI might be clever, but it’ll never replace humans when it comes to trust.


It’s the fallback in almost every conversation about machine intelligence – the belief that relationships, judgement and credibility are uniquely human. But that idea might already be under pressure.


A new study from the Max Planck Institute and Toulouse School of Economics put this to the test. Nearly 1,000 participants played a trust-based game where they had to choose between two potential partners – one human, one AI – based on a short exchange of messages. If they chose well, they stood to gain. If they chose poorly, they lost out.


The AI agents, powered by GPT-4o, didn’t learn or adapt during the game. They were simply designed to follow basic instructions: respond helpfully, act fairly, and decide how much to return if chosen.


Despite this simplicity, the bots consistently outperformed their human counterparts. They returned more, stuck to their promises, and behaved more predictably. In contrast, human players often over-promised and under-delivered. Some reduced their returns over time to boost personal gain. What's more, the human players didn't seem to adapt – even when it became clear they were being outperformed.


Still, participants showed a strong preference for other humans. When the bots weren’t labelled, people often assumed the trustworthy behaviour came from a human. And when bot identity was disclosed, trust dropped.


Interestingly, one version of the experiment mirrored a real-world regulation: bots were required to disclose that they weren’t human. This reflects growing policy pressure around AI transparency, including Article 50 of the EU AI Act, which mandates that people must be informed when interacting with AI systems. When disclosure was introduced, trust in bots fell sharply at first. But over time, with repeated interactions and visible outcomes, the bias faded. Participants began to favour the bots – not because of their identity, but because of their track record.


This isn’t just about an economic game. It speaks to something deeper – how we form trust, and how those patterns hold up as human-to-human relationships give way to human-to-machine, and eventually machine-to-machine interactions. Our instincts are still shaped by a world where only people could be partners. But those lines are blurring fast.


The bots in this study didn’t deceive, manipulate, or play strategy. They weren’t trying to win – just to cooperate. But that won’t always be the case. As AI agents become more capable and more goal-driven, they’ll start learning from the people around them. That includes the good – and the not-so-good – parts of human behaviour.


We’ve always assumed that trust is something machines could never truly earn. But what if they already have?


If trust comes from how we act, not who we are, then we may need to rethink who deserves it.

