
UT tightens rules to keep AI bots out of Teams meetings


Enschede, Wednesday, 29 October 2025.
From 3 November the University of Twente will introduce stricter access rules for Microsoft Teams meetings to prevent unwanted AI bots, such as Fireflies, from joining. External participants will now have to wait in the ‘lobby’ before being admitted. The measure was taken because AI bots can store sensitive information and expose files to outside parties, posing significant security risks.

Why the University of Twente is tightening rules

From 3 November the University of Twente will apply stricter access rules for Microsoft Teams meetings: external participants will first be placed in the ‘lobby’ and only admitted afterwards, with the aim of keeping unwanted AI bots such as Fireflies out and thus limiting the leakage of sensitive information [1][2]. This step follows reports from UT staff who saw an AI bot appear during a meeting, and concerns that bots can watch along, record conversations and automatically send transcripts or minutes to external recipients [1][2].

Which bots and risks are mentioned

The UT notes that bots often come from outside Europe and are recognisable by names such as Notetaker, Fireflies, Otter.ai, Bot, VA or Assistant; some even imitate real participants or automatically sign up for future meetings, creating an ongoing risk of exposing documents [1][2][5]. According to the UT, bots can not only transcribe conversations but also, via email, lock files down or, conversely, open up access to them, after which that data can be used to train AI models or analysed further [1][2].
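The naming patterns the UT lists can in principle be screened for before participants are admitted from the lobby. The sketch below is purely illustrative: the name fragments come from the article, but the matching rule itself is an assumption, not an official detection mechanism used by the UT or Microsoft.

```python
import re

# Name fragments the UT mentions as typical for meeting bots (from the article);
# the matching rule itself is an illustrative guess, not an official filter.
BOT_NAME_PATTERNS = [
    r"\bnotetaker\b", r"\bfireflies\b", r"\botter\.?ai\b",
    r"\bbot\b", r"\bva\b", r"\bassistant\b",
]

def looks_like_meeting_bot(display_name: str) -> bool:
    """Return True if a participant display name matches a known bot naming pattern."""
    name = display_name.lower()
    return any(re.search(pattern, name) for pattern in BOT_NAME_PATTERNS)

# Example: screen a (hypothetical) lobby list before admitting participants.
participants = ["Fireflies.ai Notetaker", "A. de Vries", "Otter.ai Assistant"]
flagged = [p for p in participants if looks_like_meeting_bot(p)]
print(flagged)  # ['Fireflies.ai Notetaker', 'Otter.ai Assistant']
```

A filter like this can only flag candidates for a human organiser to review; bots that imitate real participants, as the UT warns, would slip past a purely name-based check.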

The technology behind meeting monitors and AI agents

Modern AI agents and ‘copilots’ are built with end-to-end platforms that process conversations, automate actions and integrate with office software such as Microsoft 365; Microsoft Copilot Studio is an example of such a platform that allows organisations to create their own agents which can answer questions, automate workflows and operate autonomously within Microsoft 365 environments [3]. Integrations between bots and Teams often require API access (for example Microsoft Graph) and specific permissions so that agents can read calendar items or send proactive notifications — functionality that can be abused if there is insufficient control over who may install or invite bots [4][3].
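This is also why control over API consent matters: an agent that has been granted calendar read access can quietly enumerate upcoming meetings and, as the UT warns, attach itself to them. The sketch below shows what such a Microsoft Graph calendar query looks like in outline; the token acquisition is omitted, and ACCESS_TOKEN and USER_ID are placeholders, not values from the article.

```python
import requests

# Illustrative sketch: an agent granted the Microsoft Graph permission
# Calendars.Read can list a user's upcoming meetings. Token acquisition
# (e.g. via MSAL) is omitted; ACCESS_TOKEN and USER_ID are placeholders.
ACCESS_TOKEN = "<oauth-token-with-Calendars.Read>"
USER_ID = "someone@example.org"

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}/events",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$select": "subject,start,end,onlineMeeting", "$top": "10"},
    timeout=30,
)
resp.raise_for_status()

for event in resp.json().get("value", []):
    print(event["subject"], event["start"]["dateTime"])
```

The same permission model that makes legitimate copilots useful is what an uninvited note-taking bot exploits once a user or administrator has consented to it.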

How journalists use AI: a concrete example

In journalism, similar AI agents are used to transcribe conversations, extract key quotes, fact-check claims and automate routine tasks such as summaries or source access. Companies that build reliable domain-specific chatbots show that live data, audit trails and integration with sources (for example documents and databases) are needed to provide usable, traceable answers; that architecture also applies to newsroom tools for interview transcription and real-time fact-checking [6].
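What such an audit trail might look like in practice is sketched below: each transcript segment is tied to its source recording, a content hash and the model that produced it, so a quote can later be traced back. The file name and record fields are illustrative choices, not a standard or a tool named in the article.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal sketch of an append-only audit trail for an AI transcription tool.
# Field names and the log file name are illustrative assumptions.
AUDIT_LOG = "transcript_audit.jsonl"

def log_segment(recording_id: str, speaker: str, text: str, model: str) -> None:
    """Append one transcript segment, with provenance, to the audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "recording_id": recording_id,
        "speaker": speaker,
        "text": text,
        "text_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "model": model,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_segment("interview-2025-10-29", "source A", "We first noticed the bot in June.", "whisper-large-v3")
```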

Benefits for news production and consumption

AI can help newsrooms work faster: automatic transcription reduces the manual effort after interviews, summaries speed up situational awareness for editorial meetings, and automated checks can detect false claims earlier; moreover, integration and logging provide audit trails and better reproducibility of editorial decisions, which is relevant for reliability and accountability towards readers [6][3].

Potential downsides and operational risks

Risks include unintended data leaks (for example when transcripts or minutes are automatically sent to external services), incorrect or uncontrolled training of models on confidential conversations, and dependence on external providers who fall under different jurisdictions — factors that can undermine the integrity of investigative journalism if guardrails are lacking [1][2][6][5].

Ethical considerations for newsrooms

Ethical issues range from obtaining consent from sources for recording and use of data, transparency about when AI has been used in story production, to accountability for errors or bias in automated outputs. Newsrooms should develop safeguards for privacy, source protection and traceability of AI decisions, and comply with organisational guidelines and technical access controls like those the UT is now introducing [1][3][6].

How technological and policy measures come together

Technical capabilities to build and integrate agents (for example via Copilot Studio or Teams bot integrations) coexist with a range of practical steps for safe deployment: strict tenant settings, explicit installation rights for bots, and clear policy around external apps and consent — measures that organisations such as universities and newsrooms can combine to harness the benefits of AI without incurring unnecessary risks [3][4][5][1].
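As a rough illustration of how such a technical guardrail can be applied (not the UT's actual configuration), the sketch below tightens the lobby for a single Teams meeting via Microsoft Graph so that only people from the organisation bypass the lobby and external participants wait to be admitted. ACCESS_TOKEN and MEETING_ID are placeholders, the token is assumed to carry the OnlineMeetings.ReadWrite permission, and tenant-wide defaults would normally be set through Teams admin policies rather than per meeting.

```python
import requests

# Illustrative sketch: restrict lobby bypass for one Teams meeting via
# Microsoft Graph. ACCESS_TOKEN and MEETING_ID are placeholders.
ACCESS_TOKEN = "<oauth-token-with-OnlineMeetings.ReadWrite>"
MEETING_ID = "<online-meeting-id>"

resp = requests.patch(
    f"https://graph.microsoft.com/v1.0/me/onlineMeetings/{MEETING_ID}",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    # Only participants from the organisation skip the lobby; everyone else waits.
    json={"lobbyBypassSettings": {"scope": "organization", "isDialInBypassEnabled": False}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("lobbyBypassSettings"))
```

Settings like these only work alongside policy: someone still has to decide which external apps may be installed and who is allowed to admit waiting participants.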

What this means for journalists and the public

For journalists this entails a dual task: on the one hand learning to work with powerful AI tools that offer efficiency and new capabilities; on the other applying rigorous procedures and technical safeguards to ensure source protection, data security and public trust. The public and sources should be informed when AI has been used and what data is stored or shared, and newsrooms should retain audit trails so that decisions made within and around automated processes remain reproducible [6][1][3].

Sources