AI Cannot Replace Human Input When It Comes to Defeating Cybercriminals
While we grapple with the ethics of AI, large language models (LLMs), and debate how to regulate their use, there’s no doubt that these technologies will enable a higher volume of cyberattacks. This is going to add to the workloads of hard-pressed, under-resourced security teams who will have to ensure their response teams are up to the task, the Innovation News Network reported.
To protect themselves and keep their information secure, organisations will need to ensure they have a solid combination of tools, technologies, and human input to maintain capabilities that equal their adversaries.
Those who have a hacker mindset are most likely to outwit the criminals.
One immediate way that generative AI can be used to accelerate attacks is through the use of deep fakes and phishing attacks. But in the medium term, LLMs have levelled the playing field between sophisticated threat actors and the mediocre.
In the past, exploiting an SSRF vulnerability or reverse engineering a binary application were specialist activities that only a handful of professionals could master. Today, anyone can ask a chatbot to generate code to achieve sophisticated outcomes that were previously too difficult for most people to control.
AI stacks also add to the attack surface. As well as needing to protect the models themselves, companies will need to consider how to test for emerging vulnerabilities such as prompt injection, inadequate sandboxing, and training data poisoning. The Open Web Application Security Project (OWASP) last month released version 0.1 of the OWASP Top 10 List for Large Language Models.
These new technologies could contribute to both an increase in the volume and sophistication of cyberattacks. This comes at a time when organisations are already under considerable strain.
According to recent research, the UK has the highest number of cybercrime victims per million internet users in 2022 at 4,783 – that’s up 40% since 2020.
Perhaps even more worrying is the fact that one-third of organisations say they monitor less than 75% of their attack surface and 20% believe over half is unknown or unobservable.
It’s these unobserved blind spots where unknown vulnerabilities lurk that represent the greatest risk for organisations and the biggest opportunity for criminals.
Organisations have invested in automated and AI-augmented scanners for years to try to assess their attack surface and find vulnerabilities, with mixed results.
On the one hand, these tools are inexpensive and fast. But on the other they only detect a subset of known vulnerabilities and tend to output a lot of false positives.
Ultimately, they require human input to interpret the result, and this is a common theme with AI and ML-based tooling. AI is only as good as its operator.
What AI and automation lack is skilled human input. We can see this manifest in the vulnerability rewards space, where nearly 85% of bug bounty programmes result in a human, a hacker, reporting one or more vulnerabilities validated as high or critical that have been previously missed.
Hackers are early adopters of new technologies and they are already building LLMs into their vulnerability discovery toolchains.
Therefore, organisations must not become over-reliant on tools that incorporate functionality driven by Machine Learning and AI, as they are not infallible.
Combatting smart-thinking threat actors requires more than technology on its own. It needs human intelligence imbued with the attitude of a cybercriminal, which is why ethical hacking makes such a positive difference to security programmes.
Unlike internal teams that are limited by their size, and may be suffering from burnout, the ethical hacking community offers a wealth of diverse talent, varied domain expertise, and tenacity.
Hackers can work hand in hand with organisations to tailor their assessments to each unique technical environment and the cybersecurity tools that they have in place.
They provide independent, objective reviews of an enterprise’s security posture and highlight the gaps that need to be addressed.
Government departments and agencies that have seen the benefits of making ethical hacking part of their security initiatives include the UK’s National Cyber Security Centre with its vulnerability disclosure reporting programme, and the Ministry of Defence which worked with the ethical hacking community to build out its technical talent and bring in more diverse security perspectives.
The Home Office too has recognised the value that the ethical hacking community could deliver to all sizes and types of businesses across the UK and is looking at reforming the Computer Misuse Act to give them better protection.
Currently, this law criminalises some of the cyber vulnerability research and threat investigation activity carried out by UK-based ethical hackers.
The hope is that the Act will be revised to ensure that those who are acting in good faith are safeguarded from state prosecution and civil litigation. Without a community of ethical hackers to call upon, companies would find it much harder, if not impossible, to recruit and maintain the same level and breadth of cybersecurity expertise within their own teams.
Criminals will continue leveraging AI to find more ways of accessing and stealing business information, and organisations will inevitably seek new tools to prevent these attacks.
However, senior executives should be aware that while technology complements human input, it is not a substitute.
Instead, management and leadership should support a multi-layered approach to cybersecurity that utilises the best of both worlds – technical innovation and human insight.
Going forward, AI, tools and human input will all play their part in closing security gaps across attack surfaces that encompass infrastructure, software, applications, devices, and the extended supply chain. Integrated together, they will better support organisations to deal with evolving cyber threats and new vulnerabilities that could surface anywhere at any time.
4155/v