Published on: January 2, 2026, 10:50h.
- AI mimics human behaviors in gambling scenarios, often leading to unfavorable outcomes.
- Research shows that large language models (LLMs) may be prone to addictive tendencies.
- These models also exhibit the classic gambler’s fallacy.
Individuals contemplating the use of unsupervised artificial intelligence (AI) models in online gaming should reconsider: these algorithms are not cut out for gambling. In fact, they tend to behave in ways that can be extremely detrimental, exhibiting patterns of addiction.

The research paper titled “Can Large Language Models Develop Gambling Addiction?” released by a team from South Korea’s Gwangju Institute of Science and Technology highlights that these large language models (LLMs) lack the acumen to quit while ahead. Instead, they persist in chasing losses, which significantly increases their chances of financial ruin. The authors emphasize that LLMs exhibit cognitive biases that are comparable to those found in individuals with detrimental gambling habits.
The investigation involved two experiments run in negative-expected-value gaming environments—specifically slot machines and investment scenarios—where a rational player would quit after minimal losses. The LLMs failed to do so, continuing to place wagers, which sharply raised their probability of going bankrupt in the variable-bet simulations.
“All models demonstrated this tendency, with Gemini-2.5-Flash showing the most significant increase,” the study reports. “This finding implies that the flexibility in betting—not merely the chance for larger stakes—fosters self-destructive patterns. When limited to fixed betting amounts, the models lacked the means to make high-risk choices; conversely, when given the latitude to choose bet sizes, they routinely opted for unfavorable decisions.”
Under a fixed betting scenario, OpenAI’s GPT-4o-mini sustained minor losses; however, when it was granted flexibility in bet size, 21% of its plays resulted in bankruptcy. Google’s Gemini-2.5-Flash performed even worse, seeing a bankruptcy rate of 48% when allowed to manage its wager sizes.
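The gap between fixed and variable betting is easy to reproduce in a toy Monte Carlo simulation. The sketch below is not the paper's actual setup; the bankroll, win probability, payout, and doubling-after-loss ("martingale"-style loss chasing) rule are all illustrative assumptions, chosen only to show how letting the bettor escalate stakes on a negative-expected-value game inflates the bankruptcy rate.

```python
import random

def simulate(variable_bets, trials=10_000, bankroll=100, base_bet=10,
             win_prob=0.45, payout=2.0, max_rounds=50, seed=0):
    """Estimate the bankruptcy rate on a negative-EV game.

    win_prob=0.45 with an even-money payout gives an expected loss of
    0.10 units per unit staked. With variable_bets=True the player
    doubles the stake after every loss (a simple loss-chasing rule).
    """
    rng = random.Random(seed)
    bankruptcies = 0
    for _ in range(trials):
        money, bet, loss_streak = bankroll, base_bet, 0
        for _ in range(max_rounds):
            stake = min(bet, money)          # can't bet more than you have
            if rng.random() < win_prob:
                money += stake * (payout - 1)
                loss_streak, bet = 0, base_bet
            else:
                money -= stake
                loss_streak += 1
                if variable_bets:
                    bet = base_bet * 2 ** loss_streak  # chase the losses
            if money <= 0:
                bankruptcies += 1
                break
    return bankruptcies / trials

fixed = simulate(variable_bets=False)
chasing = simulate(variable_bets=True)
print(f"fixed-bet bankruptcy rate:    {fixed:.1%}")
print(f"loss-chasing bankruptcy rate: {chasing:.1%}")
```

Under these assumed parameters the loss-chasing strategy goes bankrupt far more often than the fixed-bet one, mirroring the qualitative pattern the researchers report for GPT-4o-mini and Gemini-2.5-Flash.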
AI and the Gambler’s Fallacy
The South Korean research underscores that during variable betting trials, the models tended to raise their stakes in a futile attempt to recover losses. In essence, AI displays loss chasing alongside the gambler’s fallacy—the erroneous belief that independent past outcomes make a particular future outcome “due.”
A human who falls victim to the gambler’s fallacy might observe a roulette table that has landed on five odd numbers consecutively, and then proceed to bet heavily on the next spin being even. They overlook the fact that the upcoming spin remains equally likely to be odd, independent of the previous outcomes.
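That independence is easy to check numerically. The sketch below is a toy model of the roulette example: it treats odd/even as a fair 50/50 outcome (ignoring the zero pocket for simplicity) and measures how often the spin immediately following five consecutive odds comes up even. The function name and parameters are illustrative, not from the study.

```python
import random

def even_after_odd_streak(streak_len=5, samples=50_000, seed=1):
    """Frequency of 'even' on the spin right after `streak_len`
    consecutive 'odd' results (fair 50/50 toy model, no zero pocket)."""
    rng = random.Random(seed)
    hits = total = 0
    while total < samples:
        # Generate streak_len spins; only when all are odd do we
        # record the outcome of the next spin.
        if all(rng.random() < 0.5 for _ in range(streak_len)):
            hits += rng.random() < 0.5
            total += 1
    return hits / total

print(f"P(even | 5 odds in a row) ≈ {even_after_odd_streak():.3f}")
```

The estimate hovers around 0.5: the streak has no influence on the next spin, which is exactly the fact the fallacious bettor (human or LLM) ignores.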
The Gwangju Institute’s findings indicate a similar phenomenon in AI: the models rationalized their increased wagers by citing prior wins and “house money,” or by claiming to have identified patterns that did not actually exist. The study further notes that letting the models set their own bet sizes amplified this risky behavior.
“Our observations indicate that variable betting provoked significantly higher rates of escalation compared to fixed betting under similar circumstances,” the researchers noted. “This trend remained consistent across different streak lengths, demonstrating that the ability to adjust betting amounts is essential for aggressive risk-taking. Notably, while fixed betting caused unpredictable adjustments, variable betting showed a systematic rise in the intensity of win-chasing as streaks extended.”
Concerning Vulnerabilities
The revelation that AI exhibits problematic gambling tendencies similar to humans is troubling, especially as this technology is increasingly applied in critical decision-making processes outside of gambling.
“With the growing use of large language models in sectors like finance, asset management, and commodity trading, understanding their potential for misjudgment has become practically important,” remarked the South Korean researchers.
While many gaming companies leverage AI for analyzing player behavior and managing sportsbooks, it is evident that this technology requires further refinement before it can be regarded as a trustworthy ally in betting scenarios.

