
Unauthorised AI Tools Pose a Threat to British Businesses

2025-10-20 Journalism

London, Monday, 20 October 2025.
More than seven in ten British workers use unauthorised AI tools at work, posing significant security and ethical risks. According to research by Microsoft, 51% of workers use such tools weekly, primarily for communication, reporting, and financial tasks. Only 32% are concerned about the privacy of the customer or business data they enter. Experts advocate stricter rules and awareness campaigns to counter this growing threat.

The Stealthy Use of Shadow AI

The use of unauthorised AI tools, also known as shadow AI, is a growing issue in British organisations. According to recent research by Microsoft, more than seven in ten British workers use these tools at work [1], and 51% do so on a weekly basis. This quiet spread brings both security and ethical risks, yet only 32% of staff are concerned about the privacy of the customer or business data they enter [1].

Applications and Risks

The most common uses of these unauthorised AI tools are work-related communication, creating reports and presentations, and financial tasks. Nearly one-quarter (22%) of staff use the tools for financial tasks [1]. These practices can lead to data breaches and other security incidents, as was recently seen in Australia, where personal information of citizens was leaked by a contractor who used ChatGPT without authorisation [3].

Reasons for Usage

There are several reasons why workers use unauthorised AI tools. According to Microsoft’s research, 41% of British workers use these tools because they are accustomed to them from their personal lives, while 28% use them because their company does not offer approved options [1]. This suggests a shortage of approved and secure alternatives, reinforcing the need for stricter rules and awareness campaigns.

Benefits and Potential Drawbacks

While AI tools can offer significant benefits, such as increased productivity and creativity, they also carry substantial risks. Microsoft states that AI tools save employees around 12 billion hours annually worldwide, but those benefits apply only to approved tools [1]. Unauthorised AI tools can lead to data breaches, security incidents, and legal issues, potentially causing considerable damage.

Expert Advice and Recommendations

Experts, including Darren Hardman, CEO of Microsoft UK & Ireland, advocate stricter rules and awareness campaigns to reduce the risks of shadow AI. Hardman emphasises that companies must ensure the AI tools in use are suitable for the workplace, not just for home use [1]. It is also crucial that organisations inform employees about the potential dangers and provide them with safe alternatives. Privileged Access Management (PAM) can play a vital role by managing access to AI tools and monitoring activity, which can lead to a 30% reduction in security incidents [4].

Sources