Deepfakes Undermine Digital Trust: How Criminals Use AI for Phishing
Brussels, Tuesday, 30 September 2025.
Criminals are increasingly using AI technology to create deepfakes, sharply raising the risk of phishing. These practices undermine digital trust in businesses and threaten the security and integrity of online communication. According to ISACA, an international organisation for IT professionals, this has been a known and growing problem since 2024: 82% of European IT and business professionals believe that digital trust will become more important over the next five years, yet only 25% of companies provide specific training for their employees [1].
Practices and Impact of Deepfakes
Deepfakes are hyper-realistic forgeries that are difficult to distinguish from real videos and audio recordings. They are often used for phishing attacks, where criminals attempt to gain access to sensitive information or financial resources. In recent months, several cases have been reported where deepfakes were used to impersonate high-ranking executives, leading to substantial financial losses [2]. According to a recent study, there has been a 45% increase in the use of deepfake technology by criminals in recent months [5].
Combating Deepfakes
To combat the spread and impact of deepfakes, businesses and governments are investing in AI detectors that analyse lip-sync, micro-expressions, or lighting. These tools are useful but must be integrated into processes for optimal effectiveness [1]. Governance is crucial: who escalates, how quickly, what evidence is retained, and who speaks publicly when. The EU AI Act and transparency requirements for synthetic media raise the baseline, but do not replace internal discipline [1].
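The lighting-consistency analysis mentioned above can be illustrated with a deliberately simplified sketch. Real detectors use trained neural models; the heuristic, the function names, and the threshold below are all illustrative assumptions, not any vendor's actual method. The toy idea: flag a clip whose average frame brightness jumps erratically between consecutive frames.

```python
# Toy illustration only: real deepfake detectors rely on trained models,
# not this heuristic. All names and the threshold are hypothetical.

def mean_brightness(frame):
    """Average pixel intensity of one frame (a list of 0-255 values)."""
    return sum(frame) / len(frame)

def lighting_jump_score(frames):
    """Mean absolute change in brightness between consecutive frames."""
    means = [mean_brightness(f) for f in frames]
    jumps = [abs(b - a) for a, b in zip(means, means[1:])]
    return sum(jumps) / len(jumps)

def looks_suspicious(frames, threshold=20.0):
    """Flag clips whose lighting fluctuates more than `threshold` per frame."""
    return lighting_jump_score(frames) > threshold

# Synthetic data: a clip with stable lighting vs. one with erratic lighting.
stable = [[120] * 16, [122] * 16, [121] * 16]
erratic = [[120] * 16, [200] * 16, [90] * 16]

print(looks_suspicious(stable))   # stable lighting: not flagged
print(looks_suspicious(erratic))  # erratic lighting: flagged
```

As the article notes, such signals are only useful when embedded in a process: a flag like this would feed an escalation workflow, not serve as a verdict on its own.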
Media Literacy and Democracy
The spread of deepfakes also has serious implications for media literacy and democracy. Both citizens and media professionals must be able to identify fake news and false information. Experts warn that the impact of deepfakes on digital security has significantly increased since the beginning of the year [5]. Increased awareness campaigns about the dangers of deepfakes have been launched, and international cooperation is seen as crucial in the fight against deepfake crimes [5].
Practical Tips for Identifying Deepfakes
To reduce the risk posed by deepfakes, experts and authorities offer practical tips for identifying manipulated media and false information. Here are some key pointers:
- Look for subtle irregularities: Deepfakes may show small inconsistencies in lip movements, micro-expressions, or lighting.
- Check multiple sources: Compare information from different, reliable sources to confirm authenticity.
- Use AI tools: Various online tools and apps are available to help detect deepfakes.
- Stay informed: Follow the latest developments and updates in the fight against deepfakes, such as new legislation and technologies.
- Trust official channels: Official government channels and established news media are generally more reliable than unknown or anonymous sources [1][5].