The growing threat of cybercrime

As digital transformation sweeps across the globe, activities in all aspects of everyday life have become dependent on digital platforms. Consequently, exposure to cybersecurity risks has become a major concern.

According to the Identity Theft Resource Center's annual data breach report, there were 3,205 data compromises in the United States in 2023, affecting more than 350 million victims [1]. Cybersecurity breaches can have devastating consequences: financial loss, sensitive data leaks, data tampering or destruction, and service disruption, among others. Just recently, the Indian Council of Medical Research was compromised, exposing the sensitive personal information of 81.5 million citizens of India in one of the biggest cybersecurity incidents of 2023 [2].

Cybercrime is forecast to skyrocket in the coming years as technology becomes ever more intertwined with both work and personal life. From 2018 to 2023, the estimated annual cost of cybercrime increased ninefold to 8.15 trillion U.S. dollars, and it is predicted to reach 13.82 trillion by 2028 [3]. It is also worth noting that these statistics reflect only recorded incidents, not the entirety of cybercrime, given the large number of unreported cases. Therefore, alongside the advancement of technology, maintaining sufficient cybersecurity to ensure availability, integrity, and confidentiality is a problem that needs to be addressed.

The emergence of AI

Cybersecurity cases often involve large amounts of information at every stage, both before and after an incident. With the constantly growing number of cases, manpower shortages and human errors are unavoidable. This is where AI comes into play: with its strength in task automation and analysis, AI can significantly boost efficiency in handling large volumes of data.

One of the most prominent applications of AI in cybersecurity is threat detection. Preventing and responding to cyberattacks requires constant monitoring for anomalies. An AI model can be trained to understand the normal state of a system; when an event deviates from that norm, the model raises an anomaly alert. With a trained model, detection often completes in microseconds. This technique is used to discern malicious behavior patterns in a network through anomalous traffic flows or abnormal log alerts. In a typical scenario, where a system consists of multiple machines running and communicating at the same time, AI is far better at processing large streams of information than a human worker. Moreover, unlike a human analyst, AI does not rest and responds orders of magnitude faster. Statistics show that while a human expert can analyze up to 12 thousand events per day, AI can accelerate the process a thousandfold, running through as many as 12 million events in the same time frame [4].
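The core idea of learning a "normal state" and flagging deviations can be sketched in a few lines. The following is a deliberately simplified, hypothetical illustration (a z-score check over a made-up requests-per-second metric), not a production detector; real systems learn far richer models of normal behavior:

```python
import statistics

def train_baseline(normal_samples):
    """Learn the 'normal state' from clean training data: mean and spread."""
    mean = statistics.fmean(normal_samples)
    stdev = statistics.pstdev(normal_samples)
    return mean, stdev

def is_anomaly(value, baseline, threshold=3.0):
    """Flag events that deviate more than `threshold` standard deviations."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical traffic metric: requests per second observed on a server.
normal_traffic = [100, 103, 98, 101, 99, 102, 97, 100, 104, 96]
baseline = train_baseline(normal_traffic)

print(is_anomaly(101, baseline))   # typical load -> False
print(is_anomaly(450, baseline))   # sudden spike -> True
```

Because the baseline is computed once up front, each incoming event is checked with only a subtraction and a division, which is why detection at machine speed is feasible.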

This capability is widely used against threats such as malware, fraud, network attacks, and phishing, improving the overall security of networks and systems. Applying the same logic, AI models learn the normal state of a program, an operation, an email, or a transaction, and detect when such entities behave differently or contain unusual content. Integrating AI into cybersecurity systems enables a proactive approach to threat intelligence, since hidden problematic patterns can be found at an early stage. AI-powered solutions can go one step further by pinpointing abnormal locations in the system, aiding human investigators in swift incident response. Microsoft, for example, has used AI solutions in its Azure cloud to cut threat analysis and detection times from months to minutes while reducing both downtime and recovery costs [5].
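As a toy illustration of content-based detection, a hypothetical and heavily simplified phishing filter might score an email against known suspicious phrases. The phrase list and threshold below are invented for the example; real filters rely on trained models over many features rather than a fixed keyword list:

```python
# Hypothetical indicator phrases; a real system would learn features from data.
SUSPICIOUS_PHRASES = [
    "verify your account", "urgent action required",
    "click here immediately", "password expired",
]

def phishing_score(email_text):
    """Count suspicious phrases; a higher score means more likely phishing."""
    text = email_text.lower()
    return sum(phrase in text for phrase in SUSPICIOUS_PHRASES)

def is_suspicious(email_text, threshold=2):
    """Flag the email when enough indicators are present."""
    return phishing_score(email_text) >= threshold

msg = "Urgent action required: your password expired. Click here immediately."
print(is_suspicious(msg))             # True
print(is_suspicious("Lunch at noon?"))  # False
```

The same scaffolding applies to transactions or program behavior: extract features, score against what "normal" looks like, and alert past a threshold.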

A double-edged sword

While the growing positive influence of AI on cybersecurity is undeniable, is it truly a trustworthy friend? Like any technology, its impact depends on the intent of the user. AI is no longer exclusive and has become far more accessible to the masses in recent years, which means hackers also have its power at their disposal.

The most evident malicious use of AI is in social engineering attacks, which exploit human error to compromise users and systems. With the rise of generative AI, where fake text, video, and audio can be produced at the press of a button, fake content can easily be created to trick even the most experienced users. Hackers can exploit AI for convincing impersonation, identity theft, and manipulation [6]. Deepfake scams that impersonate close friends or family members are becoming increasingly common, with a 3,000% increase in 2023 alone [7]. This threat will only grow as AI models become more sophisticated.

It is impossible to discuss generative AI without mentioning OpenAI's popular ChatGPT. Since its initial release, many have tried to exploit the model's capabilities for malicious ends. While safety checks have been built into the system, ChatGPT can still be abused to write phishing emails and SMS messages or to generate harmful scripts. One study shows that AI-generated phishing emails are only slightly less effective at getting clicks from victims (21%) than human-written ones (27%). However, hackers can compose these emails at least 40% faster, which significantly boosts the success of attacks [8]. There are even malicious models based on ChatGPT tailored specifically for attackers, offering malicious content generation on demand [9]. This alone lowers the barrier to entry for malicious actors looking to launch cyberattacks.

Security alongside AI

With the rapid advancement of AI, it is evident that the cybersecurity landscape is shifting in response, and AI will certainly remain a key actor. By leveraging its power, digital systems can improve cybersecurity through efficient data processing, faster response times, and constant monitoring, all while reducing costs and labor. However, both organizations and individual users must stay vigilant against cybersecurity threats born from AI, which often target the human element. Raising awareness, cybersecurity training, and good business practices are all vital measures for maintaining security while reducing human error.

Author FPT Software