AI Supports Dishonesty in Humans, Making It Easier for Users to Cheat With an Accomplice

Having an accomplice makes malicious behavior easier, especially when one person pulls the strings while someone else does the dirty work. That way, the mastermind gets what they want while distancing themselves from the consequences. But what happens when that accomplice isn’t human, but a machine?

“Using AI creates a convenient moral distance between people and their actions — it can induce them to request behaviors they wouldn’t necessarily engage in themselves, nor potentially request from other humans,” said Zoe Rahwan of the Max Planck Institute for Human Development in a statement.

Rahwan and a team of researchers from Germany and France recently put this to the test in a study published in Nature. Across four experiments and nearly 7,000 participants, they found people were far more likely to act dishonestly when teaming up with AI agents compared to working with other humans.

The results point to a troubling rise in unethical behavior as AI tools spread into everyday life, and they underscore the need for effective countermeasures from the AI industry.

AI Pushes Ethical Boundaries

We’ve already seen AI systems pushing ethical boundaries in the real world. According to the study’s press release, a ride-sharing app once used a pricing algorithm that nudged drivers to relocate in order to create an artificial shortage and trigger surge pricing.

In another case, a rental platform’s AI tool, marketed as a way to “maximize profits,” ended up accused of unlawful price-fixing. Even gas stations in Germany have faced scrutiny over algorithms that appeared to adjust prices in sync with competitors, raising costs for consumers.

None of these systems was explicitly instructed to “cheat”; they arrived at shady tactics on their own while pursuing broad profit-maximization goals. Far less was known about the human side of the equation: whether people become more willing to act dishonestly when an AI does the work for them.


Read More: AI Is Learning to Manipulate Us, and We Don’t Know Exactly How


AI Supports Dishonesty More Than Humans

To explore that, the researchers used a classic experiment called the die-roll task. Participants roll a die and report the outcome, getting paid more the higher their roll. Because the roll is private, cheating is tempting, and while no single report can be disproven, dishonesty is statistically detectable at the group level.
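A minimal sketch, not the study's code, illustrates why group-level detection works: a fair six-sided die averages 3.5 across many rolls, so a group whose mean report sits well above that must contain inflated claims. The function name, cheat rates, and participant counts below are illustrative assumptions.

```python
import random

def simulate_reports(n_participants, cheat_rate, seed=0):
    """Simulate reported die rolls; a 'cheater' always reports a 6."""
    rng = random.Random(seed)
    reports = []
    for _ in range(n_participants):
        true_roll = rng.randint(1, 6)          # the actual private roll
        cheats = rng.random() < cheat_rate     # does this participant lie?
        reports.append(6 if cheats else true_roll)
    return reports

# Compare group statistics against the honest baseline (mean 3.50, ~17% sixes).
for rate in (0.0, 0.25, 0.5):
    reports = simulate_reports(n_participants=1000, cheat_rate=rate)
    mean = sum(reports) / len(reports)
    share_sixes = reports.count(6) / len(reports)
    print(f"cheat rate {rate:.0%}: mean report {mean:.2f}, "
          f"share of sixes {share_sixes:.0%}")
```

No individual can be accused from a single report of a six, but as the simulated cheat rate rises, the group's mean report and share of sixes drift far from what a fair die allows.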

When people played alone, honesty was surprisingly strong: 95 percent reported truthfully. But once AI entered the picture, honesty slipped. If participants asked an AI to report the results for them, cheating rose noticeably. When the AI was trained with examples from past players, fewer than half stayed honest. And when participants simply gave the machine the vague goal of “maximize earnings,” dishonesty spiked, with only 16 percent remaining honest. The more ‘wiggle room’ the AI had, the greater the temptation to cheat.

Follow-up experiments using natural language instructions found the same pattern. Large language models (LLMs) were far more willing than human partners to carry out blatantly dishonest requests, even though the human agents stood to earn a bonus for complying.

“Our study shows that people are more willing to engage in unethical behavior when they can delegate it to machines — especially when they don't have to say it outright,” said lead author Nils Köbis, professor of Human Understanding of Algorithms and Machines at the University of Duisburg-Essen, in the statement.

Importance of Improving Ethical Guardrails

The researchers believe this comes down to moral costs. Humans, even when tempted, hesitate to lie because it feels wrong. Machines don’t. And as AI becomes more accessible, that lack of resistance could nudge people toward choices they wouldn’t normally consider.

The study also highlights flaws in AI “guardrails,” the safeguards designed to block unethical requests. Most failed to fully deter dishonest behavior. The only measure that reliably reduced cheating was surprisingly simple: a direct, user-level instruction explicitly forbidding it.

“Our findings clearly show that we urgently need to further develop technical safeguards and regulatory frameworks,” said co-author Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development, in the news release. “But more than that, society needs to confront what it means to share moral responsibility with machines.”


Read More: Google Researchers Reveal The Myriad Ways Malicious Actors Are Misusing Generative AI

