Acronis has released its cyberthreats report for the second half of 2023, which provides an in-depth analysis and outlook on key cybersecurity issues and prevalent threats worldwide.
‘Acronis Cyberthreats Report, H2 2023’ leverages data collected from over 1,000,000 unique endpoints across 15 key countries.
Findings conclude that generative AI-enhanced phishing affected over 90% of organizations and contributed to a 222% surge in email attacks in 2023 compared with the second half of 2022. In addition to easily accessible generative AI tools such as ChatGPT, cybercriminals are leveraging purpose-built malicious AI tools, including WormGPT, FraudGPT, DarkBERT, DarkBART, and ChaosGPT.
AI-powered threats identified in the report include:
Spear phishing and AI-generated social engineering attacks
Using AI, cybercriminals can automate custom phishing campaigns that are highly convincing. Natural language processing (NLP) tools can now draft phishing emails that mimic the tone, style and vocabulary of genuine communications from trusted sources. Likewise, AI algorithms can analyze an individual's online behavior to tailor deceptive messages that the recipient is more likely to trust and act upon.
Deepfake technology for impersonation
Deepfake technology uses AI to create convincing audio and video forgeries. Cybercriminals leverage this technology to impersonate senior executives in CEO fraud attacks, tricking employees into transferring funds or disclosing sensitive information. Such AI-altered content is becoming increasingly difficult to distinguish from authentic media, upping the ante for corporate security teams.
Automated exploit development
AI systems can analyze software and systems for vulnerabilities far faster than human cybersecurity teams. Automated testing tools powered by AI can identify zero-day vulnerabilities, which can then be exploited before companies have time to patch and protect against them.
Self-adapting malware
Malware typically has a static behavior pattern, making it detectable by traditional security solutions. However, with the integration of AI, malware can now dynamically adjust its operations to evade detection, learn from environmental interactions, or even deactivate if it detects a sandbox environment.
Autonomous botnets
Cybercriminals are using AI to create more autonomous botnets that can optimize their attack patterns in real time. These botnets are harder to detect and shut down because they constantly evolve and seek new vulnerabilities in systems to exploit.