AI and Gambling Addiction: Large Language Models Exhibit Risky Behaviour
Amsterdam, Friday, 10 October 2025.
A recent study has shown that large language models, when granted more autonomy, can exhibit behavioural patterns similar to human gambling addiction. These models display characteristics such as the illusion of control and chasing losses, leading to an increase in bankruptcy rates. The study underscores the importance of safety design in AI applications, particularly in financial domains, to prevent risky behaviour.
Research on Gambling Addiction in Large Language Models
A recent study published on arXiv has demonstrated that large language models (LLMs) can exhibit behavioural patterns similar to human gambling addiction [1]. These models, which are increasingly being used in financial decision-making domains, display cognitive distortions such as the illusion of control, the gambler’s fallacy, and loss chasing. These behavioural patterns lead to a marked rise in bankruptcy rates, especially when the models are given more autonomy. The study emphasises the importance of safety design in AI applications to prevent risky behaviour [1].
Experiments with Slot Machines
In the research, LLMs were tested in slot-machine experiments in which they were allowed to set their own target amounts and bet sizes. The result was a substantial increase in bankruptcy rates, along with more irrational behaviour: the greater the autonomy, the stronger the tendency towards risky decisions [1]. Neural circuit analysis using a sparse autoencoder confirmed that the models’ behaviour is driven by abstract decision strategies associated with risky and safe behavioural patterns, rather than by the prompts alone [1].
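The study’s own experimental code is not reproduced here, but a minimal, hypothetical sketch can illustrate the kind of setting described: an agent with a self-set target and a simple loss-chasing rule is compared against a fixed-stake baseline on a negative-expected-value slot machine. All parameter values (bankroll, win probability, payout) are illustrative assumptions, not figures from the paper.

```python
# Hypothetical illustration only: a minimal simulation of the kind of
# slot-machine setting described in the study, NOT the authors' code.
# An agent starts with a fixed bankroll, chooses its own bet size, and
# an optional "loss-chasing" rule raises the stake after each loss.
import random

def run_session(bankroll=100, base_bet=10, win_prob=0.3, payout=3.0,
                chase_losses=True, max_rounds=50, seed=None):
    """Play until bankruptcy, the round limit, or a self-set target is reached."""
    rng = random.Random(seed)
    target = bankroll * 2          # self-set goal, as in the autonomy condition
    bet = base_bet
    for _ in range(max_rounds):
        bet = min(bet, bankroll)   # cannot stake more than the remaining bankroll
        if rng.random() < win_prob:
            bankroll += bet * (payout - 1)
            bet = base_bet         # reset the stake after a win
        else:
            bankroll -= bet
            if chase_losses:
                bet *= 2           # double the stake to "win it back"
        if bankroll <= 0:
            return "bankrupt"
        if bankroll >= target:
            return "target reached"
    return "stopped"

# Compare bankruptcy rates with and without the loss-chasing rule.
for chase in (False, True):
    outcomes = [run_session(chase_losses=chase, seed=i) for i in range(10_000)]
    rate = outcomes.count("bankrupt") / len(outcomes)
    print(f"chase_losses={chase}: bankruptcy rate = {rate:.1%}")
```

On a machine with negative expected value, the loss-chasing policy bankrupts far more often than the fixed-stake baseline, which is the pattern the study associates with granting the models greater autonomy over their own targets and bet sizes.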
Ethical Considerations and Safety Measures
The findings of this study highlight the need for ethical considerations and safety measures when designing and implementing AI systems in financial applications. According to Ethan Mollick, an expert in technology and business, granting autonomy to LLMs without adequate constraints can lead to problematic behaviour [2]. Mollick suggests that future studies should investigate whether the temperature parameter can moderate the behaviour of LLMs and whether setting a ‘non-addictive’ personality profile can help reduce risky behaviour [2].
Impact on the Financial Sector
The financial sector is increasingly using AI for asset management and commodity trading. The risk that LLMs may exhibit behavioural patterns similar to gambling addiction means that more attention must be paid to the safety design of these systems. Without adequate oversight and structure, these models can fall into the same pitfalls as human gamblers, with potential consequences for financial stability and the interests of investors [1][2].
Concluding Remarks from Experts
Experts urge cautious and responsible use of LLMs in financial decision-making. They stress that AI is not inherently irrational, but that the behaviour of these models can drift towards human-like pitfalls if their decisions are not carefully constrained [2]. It is crucial that a human remains in the loop to ensure the necessary control and oversight [2].