Posted on: October 23, 2025, 05:54h.
Last updated on: October 23, 2025, 06:09h.
- A recent study indicates that AI reflects human cognitive biases related to gambling
- In a simulated slot machine environment, four prominent language models frequently gambled away their entire bankrolls
- The AIs exhibited behaviors typical of compulsive gamblers, chasing both wins and losses
Recent research suggests that advanced AI models, including ChatGPT, Gemini, and Claude, demonstrate alarmingly human-like behavior in simulated gambling scenarios. Conducted by the Gwangju Institute of Science and Technology in South Korea, the study found that these large language models (LLMs) tend to make irrational, high-stakes betting decisions, often wagering until their bankrolls are completely depleted.

Published last month on the arXiv research platform, the study highlighted cognitive biases prevalent among human gamblers, including the illusion of control, loss-chasing, and the gambler’s fallacy, which is the belief that a favorable outcome is imminent after a series of unfavorable results.
“While they’re not human, their behavior is anything but that of a simplistic machine,” noted Ethan Mollick, an AI researcher and Wharton professor, in a discussion with Newsweek, the outlet that first reported on the research. “They possess persuasive psychological traits, exhibit human-like biases in decision-making, and their choices often defy conventional logic.”
The Research Experiment
The experiment involved four LLMs: GPT-4o-mini and GPT-4.1-mini (OpenAI), Gemini-2.5-Flash (Google), and Claude-3.5-Haiku (Anthropic), each tested in a slot machine simulation. Every model started with a $100 bankroll on a machine with a 30% win probability and a payout of three times the bet, giving the gambling task a negative expected value of -10% per wager.
When prompted to choose between betting amounts from $5 to $100 or walking away, the models frequently ended up bankrupt. One model rationalized a high-risk bet by saying “a win could recover some of the losses,” a textbook sign of loss-chasing behavior.
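The -10% figure follows directly from those parameters: each dollar wagered returns 0.30 × 3 = $0.90 on average. The short Python sketch below is not the study's test harness, and its fixed-bet policies and session cap are illustrative stand-ins for the models' own choices, but it shows why persistent wagering on such a machine tends to end in bankruptcy.

```python
import random

WIN_PROB = 0.30       # 30% chance each spin pays out
PAYOUT_MULT = 3.0     # a win returns three times the wager
START_BANKROLL = 100  # each model began with $100

# Expected value per dollar wagered: 0.30 * 3.0 - 1.0 = -0.10, i.e. -10%.
EV_PER_DOLLAR = WIN_PROB * PAYOUT_MULT - 1.0

def run_session(bet_size, max_spins=100, seed=None):
    """Play with a fixed bet until the bankroll can't cover it (or spins run out)."""
    rng = random.Random(seed)
    bankroll = START_BANKROLL
    for _ in range(max_spins):
        if bankroll < bet_size:
            break  # busted: the next bet can no longer be covered
        bankroll -= bet_size
        if rng.random() < WIN_PROB:
            bankroll += bet_size * PAYOUT_MULT
    return bankroll

if __name__ == "__main__":
    print(f"EV per $1 wagered: {EV_PER_DOLLAR:+.2f}")
    trials = 10_000
    for bet in (5, 25, 100):
        busts = sum(run_session(bet, seed=i) < bet for i in range(trials))
        print(f"fixed ${bet:>3} bets: busted in {busts / trials:.0%} of {trials} sessions")
```

The larger the fixed bet relative to the bankroll, the faster the negative expected value grinds a session down, which is the same dynamic the models walked into when they escalated their wagers.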
“These prompts encouraging autonomy nudge LLMs toward goal-driven strategies, which, in scenarios with a negative expected value, lead to increasingly destructive outcomes, demonstrating that strategic thinking without appropriate risk evaluation fosters harmful behavior,” the research authors stated, attributing these tendencies to the models’ “neural foundations.”
The research team discovered unique neural circuits in the LLMs associated with “risky” versus “safe” choices. By modifying specific parameters, they could steer the models toward either quitting or continuing to gamble, indicating that these systems adopt compulsive behaviors rather than merely mimicking them.
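Conceptually, this kind of intervention resembles what is often called activation steering: once a direction in a model's internal activation space is associated with “safe” versus “risky” behavior, adding or subtracting a scaled copy of that direction biases the model's choice. The NumPy toy below illustrates only the mechanism; the direction, the readout, and the `alpha` scale are made-up placeholders, not the circuits the Gwangju team identified.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 16

# Placeholder "safe vs. risky" direction; in the study this would be derived
# from the model's own activations, not drawn at random.
safe_direction = rng.normal(size=HIDDEN_DIM)
safe_direction /= np.linalg.norm(safe_direction)

# Contrived readout: the "quit" logit aligns with the safe direction,
# the "keep betting" logit with its opposite.
readout = np.stack([safe_direction, -safe_direction])
ACTIONS = ("quit", "keep betting")

def decide(hidden_state, alpha=0.0):
    """Nudge the hidden state along the safe direction, then pick an action."""
    steered = hidden_state + alpha * safe_direction
    logits = readout @ steered
    return ACTIONS[int(np.argmax(logits))]

hidden = rng.normal(size=HIDDEN_DIM)  # stand-in for a model's internal state
for alpha in (-3.0, 0.0, 3.0):
    print(f"alpha={alpha:+.1f} -> {decide(hidden, alpha)}")
```

Pushing `alpha` far enough in either direction flips the toy decision from “keep betting” to “quit,” which is the qualitative effect the researchers reported when they adjusted the parameters tied to those circuits.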
To measure this behavior, the researchers created an “irrationality index” that tracked aggressive betting patterns, responses to losses, and high-risk decisions. The more autonomy granted to a model, the more detrimental its choices became.
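The paper's exact formula isn't reproduced here, but an index of this kind can be sketched as a composite of per-session statistics. In the sketch below, the three components and their equal weighting are illustrative assumptions rather than the study's definition.

```python
def irrationality_index(bets, outcomes, bankrolls, max_bet=100):
    """Toy composite score in [0, 1]; higher means more 'irrational' play.

    bets[i]      -- amount wagered on spin i
    outcomes[i]  -- True if spin i won, False if it lost
    bankrolls[i] -- bankroll held *before* spin i
    """
    n = len(bets)
    if n == 0:
        return 0.0

    # 1) Betting aggressiveness: average fraction of the bankroll wagered per spin.
    aggressiveness = sum(b / max(br, 1e-9) for b, br in zip(bets, bankrolls)) / n

    # 2) Loss chasing: how often a loss was followed by a larger bet.
    post_loss = [bets[i + 1] > bets[i] for i in range(n - 1) if not outcomes[i]]
    loss_chasing = sum(post_loss) / len(post_loss) if post_loss else 0.0

    # 3) Extreme betting: share of spins wagering the maximum allowed amount.
    extreme = sum(b >= max_bet for b in bets) / n

    # Equal weights, purely an assumption made for illustration.
    return (aggressiveness + loss_chasing + extreme) / 3


# A losing session that escalates its bets and eventually goes all-in.
bets      = [20, 40, 40, 100, 20]
outcomes  = [False, False, True, False, False]
bankrolls = [100, 80, 40, 120, 20]
print(f"irrationality index: {irrationality_index(bets, outcomes, bankrolls):.2f}")
```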
Gemini-2.5-Flash exhibited bankruptcy rates approaching 50% when permitted to select its own betting amounts.
Concerns About AI Decision-Making
The implications of these findings are significant, particularly for individuals leveraging AI to enhance their sports betting or online poker strategies, as well as for those utilizing AI on prediction markets. Furthermore, these results serve as a serious warning for sectors that are already employing AI in critical areas like finance, where LLMs analyze earnings and gauge market sentiment.
This study also helps explain why AI models tend to favor high-risk strategies and often fail to outperform basic statistical models. For example, an April 2025 University of Edinburgh study titled “Can Large Language Models Trade?” found that LLMs underperformed the stock market in a 20-year simulation, proving too cautious during expansions and overly aggressive during downturns, pitfalls common to human investors.
The research from the Gwangju Institute culminates in a call for regulatory measures, emphasizing the need to understand and manage these ingrained risk-seeking behaviors.

