In today’s bulletin, Charlie looks at how both cyber attackers and defenders are using AI, and how AI can support each side’s goals.
I recently discovered that, in a couple of weeks, I was due to be in two places at once – umpiring a large client exercise and delivering a talk entitled “How Hackers are Using AI (and what we can do about it)” at ScotlandIS’ ScotSoft Conference 2025. The exercise won, so my colleague Chris Butler is going to deliver the talk. As the talk has been sprung on Chris, I thought I would help by doing some research into the subject and giving him notes on what I found.
In my research on the use of AI in cyber, both by hackers and by those defending against attacks, I found that we are on the cusp of a huge change. As with AI in business, people in cyber recognise that a revolution is coming and that it is going to change the way we all work and create, yet many organisations don’t quite know what this looks like, and are cautious of the security risks and the potential for AI to generate misleading or inaccurate results. Zscaler reported a 3,464% year-on-year increase in the use of AI in 2024, so we know that more and more organisations are using AI as part of their daily work. [1] CrowdStrike reported that 64% of cybersecurity professionals are already researching or using GenAI tools, and that nearly 70% intend to purchase them within 12 months, so in the protective cybersecurity world, organisations are turning to AI tools to help protect themselves or their clients. [2]
There is also an arms race between offensive and defensive AI. Defenders are using AI to detect threats, spot patterns, respond more quickly, and present data, making their operations more efficient, while hackers are using AI to automate attacks, produce better collateral for those attacks, and improve their software. From my reading, many hackers and defenders are only just starting to use AI tools, and plenty on both sides are experimenting with different ideas and ways of using them to find the best return on investment. Each advance by one side drives innovation from the other, making this a constant battle for supremacy.
So how are defenders using AI to protect their organisations or their clients?
- They are using it to reduce workload and drive efficiency. Many CISOs and SOC leaders believe that AI can improve efficiency by 25%, and they are using it, or plan to use it, for threat scoring, report summarising, and routine tasks. They also expect it to reduce staff burnout: by spotting anomalous behaviour, reducing false positives, and accelerating response, AI allows staff to concentrate on more complex tasks while it handles the mundane.
- A number of reports outlined the financial savings and return on investment of using AI. IBM found that organisations using AI save $2.2M per breach on average, and CrowdStrike said firms expect 30% fewer incidents and 31% cost optimisation from platform consolidation. [3]
- Many organisations see AI as augmentation for the humans in their organisation, not as a wholesale replacement. Cyber leaders see AI as a force multiplier that boosts productivity, accelerates onboarding, and improves data-driven decision-making. They report minimal concern over job displacement, and the present consensus is that humans remain essential.
- AI is used in the detection and prevention of cyber attacks. AI-aware DLP (Data Loss Prevention) systems have flagged millions of violations, including leaks of personally identifiable information (PII), source code, and confidential financial and medical data. Enterprises are deploying AI-aware DLP that recognises when sensitive data is being sent to GenAI platforms (e.g. ChatGPT, Copilot) and then blocks or sanitises the content. AI is also starting to be used to proactively identify threats and bring them to the attention of humans, who can patch a vulnerability before it is exploited: in July 2025, Google’s AI agent “Big Sleep” spotted and blocked a cyber exploit before it could hit, a first for artificial intelligence in threat prevention. [4] AI is enabling self-healing networks that automatically detect faults, cyber attacks, or performance issues, and then reconfigure, reroute, or repair themselves without human intervention. Finally, AI is being used for deception and active defence: AI-generated honeypots and moving-target techniques trap attackers who get inside the perimeter.
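The anomaly-spotting described above can be illustrated with a toy example. This is a minimal sketch, not real SOC tooling, and the daily failed-login counts are hypothetical: it simply flags days that deviate sharply from the norm using a z-score, which is the basic principle behind surfacing outliers for a human analyst.

```python
import statistics

def anomaly_scores(counts):
    """Score each value by how many standard deviations it sits
    from the mean of the series (a simple z-score)."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # avoid division by zero
    return [(c - mean) / stdev for c in counts]

def flag_anomalies(counts, threshold=2.5):
    """Return the indices of values that deviate strongly from the norm."""
    return [i for i, z in enumerate(anomaly_scores(counts)) if abs(z) > threshold]

# Daily failed-login counts for one account (hypothetical data):
logins = [3, 5, 4, 2, 6, 4, 3, 95, 5, 4]
print(flag_anomalies(logins))  # [7] -- the day with the spike
```

Real platforms use far richer behavioural models, but the payoff is the same: the machine sifts the routine noise and only the spike reaches a human.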
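The DLP-style blocking and sanitising described above boils down to recognising sensitive patterns in outbound text and redacting them before they reach an external GenAI service. Here is a minimal sketch in Python; the regex patterns are purely illustrative, and real DLP products use far broader detection than two expressions.

```python
import re

# Illustrative patterns only; production DLP engines detect far more.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitise(text: str) -> str:
    """Redact sensitive matches before the text leaves the organisation,
    e.g. before it is pasted into a GenAI prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Summarise: contact jane.doe@example.com, card 4111 1111 1111 1111."
print(sanitise(prompt))
# Summarise: contact [REDACTED-EMAIL], card [REDACTED-CARD].
```

A real deployment would sit in a proxy or browser extension and decide between blocking, redacting, or simply logging the violation, but the core recognise-and-sanitise step is the same.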
On the flip side, hackers are using AI to improve their ability to hack and make themselves more efficient!
- At the simplest end, AI has allowed hackers to write much more convincing phishing and spear-phishing emails and more intimidating ransom notes; gone are the days of obvious phishing emails full of poor English.
- Deepfakes and AI-generated voices have been successfully used by hackers to scam companies out of millions of dollars. [5] In 2024, UK engineering giant Arup lost $25 million in a deepfake scam in Hong Kong. A finance employee there was tricked into joining a video call with what appeared to be the UK CFO and a number of senior executives; every participant was an AI-generated deepfake persona. The employee did what they were asked on the call and authorised 15 separate transactions across five Hong Kong banks. The scam was only discovered a few weeks later, when the company realised the millions were missing. As much of our business life is online, this type of scam can only grow, facilitated by cheap and hyper-realistic AI-generated people.
- AI-enhanced algorithms have accelerated brute force and dictionary attacks, helped by people still reusing the same password across multiple platforms and mixing passwords between their business and private lives. [6] Once attackers have obtained one password, they will try it across multiple platforms to see where it works, and AI helps automate that checking at scale.
- AI now outperforms humans at solving CAPTCHA puzzles, so organisations will have to find new ways of replacing them. As a side note, CAPTCHAs do my head in and I am always failing them!
- At present, the reports I have read say that attackers are only just starting to use AI to develop malware and hacking tools. In a Wired article, attackers were starting to use Claude to identify targets, develop malware, make tactical and strategic attack decisions, and exfiltrate data. [7] ESET discovered PromptLock – the first AI-powered ransomware proof-of-concept – generating malicious scripts on the sly. Attacks are beginning to be automated, saving hackers time and allowing them to hit more targets.
- Criminals have created sham AI platforms (e.g. Flora AI) to distribute malware, exploiting the growing appetite for AI tools. If an AI site looks the part and is cheap, it will attract users who can then be infected with malware.
- North Korean operatives, according to the BBC, are using AI to craft job applications, write code, and translate communications, securing remote jobs at Fortune 500 companies. [8]
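The password-reuse problem described earlier is exactly what makes automated credential checking so effective: one leaked password can be hashed and tested everywhere, and defenders run the same lookup in reverse to warn users. Below is a minimal sketch; the breach set is hypothetical, and services such as Have I Been Pwned provide this kind of corpus at scale.

```python
import hashlib

def sha1_hex(password: str) -> str:
    # SHA-1 is the format breach corpora are commonly published in;
    # it is NOT suitable for storing your own users' passwords.
    return hashlib.sha1(password.encode()).hexdigest()

# Hypothetical breach corpus, held as hashes rather than plaintext.
BREACHED = {sha1_hex(pw) for pw in ["password123", "letmein", "qwerty2024"]}

def is_breached(password: str) -> bool:
    """True if the password appears in the breach set: the same
    lookup attackers automate across many sites at once."""
    return sha1_hex(password) in BREACHED

print(is_breached("password123"))        # True
print(is_breached("c0rrect-h0rse-42!"))  # False
```

The defensive moral is the one in the bullet above: once a reused password leaks anywhere, tooling on both sides can find it everywhere.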
AI is reshaping cybersecurity on both sides. For defenders: efficiency, savings, resilience. For attackers: automation, deception, speed. So AI is neither “good” nor “bad”; it is a force amplifier, and its impact depends on how it is used. [9] There will be a growing divide between those who embrace AI now and those who, for various reasons, hold back. Perhaps it is those who choose not to embrace it who will become the most vulnerable, and on whom AI will have the greatest impact.
[1] Zscaler ThreatLabz (2025), AI Security Report & Expert Analysis
[2] CrowdStrike (2024), State of AI in Cybersecurity Survey
[3] IBM Security & Ponemon Institute (2023), Cost of a Data Breach Report 2023
[4] Economic Times (16th July 2025), How AI agent Big Sleep became Google’s secret cyber watchdog
[5] 6Degrees (2024), How Hackers Are Using AI
[6] Morgan Stanley (2024), Cybersecurity in the Age of AI
[7] Wired (2025), The Era of AI-Generated Ransomware Has Arrived
[8] BBC News (2025), Anthropic AI ‘Weaponised by Hackers’
[9] IBM Security & Ponemon Institute (2024), AI & Automation in Threat Intelligence: 2024 Report