
Criminals Exploit AI for Deepfakes and Cybercrime
2025-09-17 fake news

Amsterdam, Wednesday, 17 September 2025.
Criminals are increasingly using AI technology, such as deepfakes and AI call centres, to facilitate their activities. This creates new challenges for organisations and individuals who want to protect themselves against cybercrime. SolidBE, a network and security specialist, warns of the risks and offers solutions to counter these threats. The use of AI in the criminal world increases the complexity and effectiveness of attacks, making new security measures necessary.

AI Technology in the Criminal World

The use of AI technology in the criminal world is growing rapidly. Cybercriminals are using deepfakes and AI call centres to facilitate their activities. Deepfakes, realistic forgeries of audio and video content, are used to impersonate individuals, posing a threat to privacy and security [1]. AI call centres are employed to deceive call-centre agents and extract sensitive information [2]. These techniques increase the complexity and effectiveness of attacks, making new security measures necessary.

Risks and Impact

The impact of AI-based criminal techniques is widespread. Criminals use generative AI models to produce realistic audio and video content that is difficult to detect. For example, a recent campaign used an AI coding assistant to carry out a multi-phase extortion operation against 17 organisations, including healthcare facilities, emergency services, government agencies, and churches [3]. These attacks progress through stages such as reconnaissance, infiltration, exfiltration, and extortion. The threat of AI-driven disinformation also increases during elections, where AI can play a role in anti-American influence campaigns [4].

Security Measures and Solutions

Organisations and individuals must act proactively to counter these threats. SolidBE offers services in network security, support, consultancy, and monitoring, distinguishing itself through a practical, cost-effective approach to consultancy and ensuring network continuity through proactive monitoring and security [1]. Defensive strategies should focus on control rather than detection: ensuring that unknown code does not run and that trusted apps cannot be repurposed into attack chains [3]. AI-aware defences, anomaly detection, and prompt monitoring are needed to counter fast, scalable, and psychologically precise attacks [3].
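
To make the control-over-detection principle concrete, here is a minimal Python sketch of an executable allow-list check: a file may only run if its SHA-256 hash appears on an approved list, so unknown code is blocked by default rather than merely flagged. The hash value, file path, and function names are illustrative assumptions; in practice such a policy would be enforced by the operating system or an endpoint agent, not by a standalone script.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list of SHA-256 digests of approved executables.
# In a real deployment this would come from a signed, centrally managed policy.
APPROVED_HASHES = {
    "0f2e5a1db6c49c3f6a7e8b4d9c1a2b3c4d5e6f708192a3b4c5d6e7f8091a2b3c",  # placeholder digest
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: Path) -> bool:
    """Allow execution only when the binary's hash is on the approved list."""
    return path.is_file() and sha256_of(path) in APPROVED_HASHES

if __name__ == "__main__":
    candidate = Path("/usr/local/bin/new_tool")  # hypothetical path
    verdict = "approved to run" if may_execute(candidate) else "blocked: not on the allow-list"
    print(f"{candidate}: {verdict}")
```

Blocking by default reverses the burden of proof: instead of trying to recognise every new malicious payload, only explicitly approved code is allowed to execute.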

Practical Tips for Readers

To protect against AI-based cybercrime, readers can apply the following practical tips:

  1. Verify sources: Always check the authenticity of information and use reliable sources.
  2. Secure password management: Use strong, unique passwords for each account and enable two-factor authentication (2FA).
  3. Raise awareness: Increase your awareness of the risks of AI-based attacks and share knowledge with colleagues and family.
  4. Regular updates: Keep your devices and software up-to-date to install the latest security patches.
  5. Use AI-based detection tools: Deploy detection tooling that identifies and blocks suspicious activity; a minimal example follows this list [1][3][4].
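
As a minimal illustration of tip 5, the sketch below flags sudden spikes in request volume using a median-based (MAD) anomaly score, the kind of signal that fast, automated AI-driven attacks tend to produce. The telemetry values, function name, and 3.5 threshold are illustrative assumptions, not part of any specific detection product.

```python
import statistics

def flag_anomalies(counts, threshold=3.5):
    """Return indices of observations with a large modified z-score (median/MAD based).

    counts: per-minute request counts from hypothetical telemetry.
    """
    median = statistics.median(counts)
    mad = statistics.median(abs(c - median) for c in counts) or 1.0
    return [
        i for i, c in enumerate(counts)
        if 0.6745 * abs(c - median) / mad > threshold
    ]

if __name__ == "__main__":
    # Mostly steady traffic with one sharp spike, as an automated
    # AI-driven credential-stuffing burst might produce.
    per_minute = [12, 15, 11, 14, 13, 12, 240, 13, 14, 12]
    print("Suspicious minutes:", flag_anomalies(per_minute))
```

A median-based score is used here because a single extreme spike barely moves a plain mean/standard-deviation z-score on short windows; in practice such a check would run inside a monitoring pipeline and be combined with other signals, such as the prompt monitoring mentioned above.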

Sources