Three Limitations of AI-Driven Cyber Attacks
19:00 - September 13, 2023

TEHRAN (ANA)- Richard Ford, the CTO of Integrity360, outlines and debunks the myths surrounding AI-driven cyber attacks.
From contextual threat prioritisation to task automation, Artificial Intelligence (AI) has been a key cybersecurity trend for several years, the Innovation News Network reported.

However, the controversy surrounding the implications of increasingly sophisticated technologies has reached new heights following the release of several impressive natural language processing tools.

Be it ChatGPT, Jasper AI, or any of several other chatbots, industry concern is mounting over the potential for such applications to be used to create sophisticated malware and hacking tools.

From writing in the specific style of authors to producing complex streams of code, this wave of AI models has truly broken new ground, completing various tasks in a highly sophisticated manner.

Yet there are now worries that the ability of such tools to harness huge amounts of data and expedite processes could be used and abused by nefarious actors.

But exactly how much truth is there to this? Are such concerns valid?

Some industry players are already exploring the potential security implications of ChatGPT and similar tools to obtain an understanding of the threats that they pose.

Check Point, for example, used the platform to develop a phishing email carrying an Excel document with malicious code that could be used to download a reverse shell to a potential victim’s endpoint.

Of course, experiments such as these are concerning.

However, many of these specific use cases don’t reflect the current state of AI technology, nor the lengths that cybercriminals are typically willing or able to go to.

It’s important to recognise that most cyber attacks are relatively basic in nature, carried out by amateur hackers known as ‘script kiddies’ who rely on repeatable pre-built tools and scripts.

That said, there are also more sophisticated attackers who have the skills to uncover vulnerabilities and develop tailored, novel techniques to exploit them.

The truth of the matter is that current AI tools simply aren’t advanced enough to create malware that is more dangerous than those strains that we’re already facing.

Current AI tools are limited in several ways that prevent them from creating the kind of malware many people fear.

The first limitation is that current AI tools struggle to navigate situations where there is no definitive answer.

We see this in cybersecurity professionals’ own use of AI for defensive purposes: while such tools may flag suspicious activity, they still need human input to confirm whether it is genuinely malicious.

As AI grows more sophisticated, tools are emerging that handle ambiguous situations better, but these remain relatively rare and are seldom found in the arsenals of threat actors.

The second limitation is that all AI engines, regardless of whether they have been built for good or ill, are constrained by the data they ingest.

Like any other model, a malware-specific AI algorithm would need to be provided with massive amounts of data to learn how to evade detection and cause the right kind of damage. And while such datasets undoubtedly exist, there are further limitations as to exactly how much AI models can learn from them.

The third limitation, and a critical one, is that human brains remain superior to Artificial Intelligence for now.

While AI has proven incredibly useful in accelerating processes, automating tasks, and identifying threats quickly and with a high degree of accuracy, these models still require the support and knowledge of experienced cyber professionals.

Indeed, we still need a combination of technology and skilled people in cybersecurity, and this is no different in cybercrime.

That is not to say there are no risks at all. Indeed, the primary security concern surrounding ChatGPT and its alternatives at present is the potential for such tools to democratise cybercrime.
