
Exploring the Value of AI in Cybersecurity


AI usage has skyrocketed since the launch of ChatGPT in 2022, and this includes a surge in AI being used as part of cybercrime. With AI also prone to making mistakes and itself vulnerable to cyberattacks, it is understandable that there are concerns about using AI in cybersecurity.  

In this article, we explain AI in simple terms and look at how it is being leveraged by both cybercriminals and cybersecurity professionals. 

 

What is AI in simple terms? 

Artificial intelligence (AI) refers to computer systems that can perform tasks that usually require human intelligence, such as speech and image recognition, language translation, creative writing, data analysis, and decision making.  

A key part of how modern AI works is Machine Learning (ML), which teaches systems to make predictions and decisions without being explicitly programmed for each task. Deep learning, a form of ML, uses layered neural networks, loosely inspired by the human brain, to solve complex problems. 
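To make the idea of learning without explicit programming a little more concrete, below is a minimal, illustrative sketch using the open-source scikit-learn library in Python. The feature values and labels are invented purely for illustration; real systems learn from far larger and richer datasets.

    # Illustrative only: a model learns a pattern from labelled examples
    # instead of being given hand-written rules. All numbers are made up.
    from sklearn.tree import DecisionTreeClassifier

    # Each example: [failed login attempts, megabytes downloaded]
    examples = [
        [2, 5], [3, 8], [1, 4],            # normal activity
        [40, 300], [55, 250], [60, 400],   # suspicious activity
    ]
    labels = ["normal", "normal", "normal",
              "suspicious", "suspicious", "suspicious"]

    model = DecisionTreeClassifier()
    model.fit(examples, labels)            # the model learns from the examples

    # The model can now judge activity it has never seen before
    print(model.predict([[50, 350]]))      # expected output: ['suspicious']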

 

How long have we had AI? 

AI has actually been around for a lot longer than you might think: the idea of a created intelligence has been part of legends, poetry, and art as far back as ancient Greece.  

The term ‘artificial intelligence’ was introduced in 1956 by a group of scientists whose early research and computers would later become the foundation of the AI we know today. Unfortunately, computers in the 1950s and 1960s did not have the power to fully realise many of the theories and programmes developed at that time. Another problem was that AI systems needed huge amounts of information, and there were limits on how that information could be input and stored. It therefore took several decades of research and development in computer science and technology before AI could really take off.  

As computing power grew, many elements of AI were developed, but because they were absorbed into the industries and applications that used them, we don’t really think of them as AI. For example, the software that recognises and reads postcodes so mail carriers can process letters and parcels more quickly is a form of AI. Another example is how, in the 1990s, UK banks used rule-based AI systems to assess the risk of loan applicants based on credit history and other factors.   

During this time, our knowledge of how the human brain works was also improving. As we mapped the neural networks in human brains, this gave rise to the concept of ‘artificial neural networks’.  

Over the last 20 years, technology has evolved enough to make AI as we know it today a reality. The internet helped address the information-gathering problem, and cooperation between industries and scientific disciplines led to breakthroughs in the field. Although ChatGPT was only released in 2022, the science behind it came from a 2017 development known as the ‘transformer architecture’. Only after decades of research and development, combining artificial neural networks and transformer architecture with access to large amounts of data to learn from, could a new form of AI known as generative AI exist. 

Generative AI creates new content based on what it has learned, such as text, images, code, and even music. Large Language Models (LLMs), of which ChatGPT is one, are a form of text-based generative AI.  

 

How is AI used in cybercrime? 

Now that you know AI has been around for much longer than ChatGPT, it makes sense that malicious actors have been leveraging it for some time. However, now that more AI systems are public and free to access, the risk and impact of cyber-attacks is growing. For more information on how much the threat is growing, the National Cyber Security Centre has published an assessment of the impact of AI on the cyber threat. 

But how is AI used to carry out cyber-attacks? Threat actors use AI to enhance the sophistication and effectiveness of their attacks. This can be through: 

  • Generating highly convincing phishing emails and chatbots that are harder to recognise as such because the classic scam signs are missing.   
  • Injecting machine learning into malware so the malware can explore and learn about target systems, launching attacks without human intervention.  
  • Modifying the behaviour of malware to evade detection for longer, mimic legitimate users, and determine when/what attacks will do the most damage.  
  • Creating deepfake audio and video to impersonate trusted individuals. 
  • Automating tasks that would take humans a long time such as password guessing, scanning for vulnerabilities, and data scraping to build comprehensive profiles on targets. 

Using AI for reconnaissance and automation vastly increases the scale and success of attacks by malicious actors. So how can cybersecurity consultants counteract this? AI is the answer.  

 

How is AI used in cybersecurity? 

Many of the same capabilities of AI, such as automation and data scraping, can be put to positive use against threat actors: 

  • Learning from known malicious IPs, domains, and applications helps AI flag similar suspicious activity without being programmed to recognise each one in advance.   
  • Automated incident response can execute countermeasures against cyberattacks at any time of day, and more quickly than humans can. 
  • Detection of insider threats by analysing the behaviour of legitimate users and flagging when those patterns differ from the norm (a simple sketch of this kind of behaviour-based detection follows this list).  
  • Gathering, analysing, and prioritising large amounts of threat intelligence, more than humans could research as quickly.  
  • Analysing visual and text elements of phishing websites and emails to detect imitations and scams.  
  • Real-time detection of anomalies and flagging potentially fraudulent activities that trigger additional verification steps.  
  • Cross-referencing vast amounts of data across different endpoints, networks, and logs to uncover signs of stealth infiltration.  
  • Creating honeytokens: fake digital assets with no real function, strategically placed and disguised as valuable files. Honeytokens act as digital traps that, when triggered, automatically start remediation steps.  
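
As a rough sketch of the behaviour-based detection mentioned in the list above, the example below uses scikit-learn's IsolationForest to learn what ‘normal’ user activity looks like and flag departures from it. The figures are invented for illustration; a real deployment would use much richer telemetry and careful tuning.

    # Illustrative only: flag user activity that departs from the learned norm.
    from sklearn.ensemble import IsolationForest

    # Each row: [logins per day, files accessed, MB transferred] for one user-day
    normal_behaviour = [
        [3, 20, 15], [4, 25, 12], [2, 18, 10],
        [5, 30, 20], [3, 22, 14], [4, 27, 18],
    ]

    detector = IsolationForest(random_state=0)
    detector.fit(normal_behaviour)          # learn what "normal" looks like

    # predict() returns 1 for activity consistent with the norm, -1 for anomalies
    new_activity = [[4, 24, 16],            # looks routine
                    [3, 400, 900]]          # large spike in files and data moved
    print(detector.predict(new_activity))   # expected output: [ 1 -1]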
These are just some of the ways cybersecurity consultants use AI to counteract how threat actors use it. The expected follow-up question, however, is: can we trust AI to perform these countermeasures effectively and securely? 

 

Can AI in cyber defence be trusted? 

The short answer is no: without human intervention, AI is not capable of being as secure and effective as it needs to be. This is why the human element is so important. Cybersecurity professionals should only use AI solutions that have been extensively tested, comply with international regulations, and rarely make mistakes.  

Even then, it is still best practice to check that the AI has completed its job correctly. This can be done in a variety of ways: 

  • Manually testing and feeding the results back to the AI so it can learn from its mistakes.  
  • Simulating attacks and measuring performance (a simple sketch of this follows this list). 
  • Retraining and updating AI so it continues to function at peak performance.  
  • Auditing and learning why certain decisions were made, including checking for biases.  
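
As a rough sketch of simulating attacks and measuring performance, the example below compares an AI tool's verdicts against the known ground truth of a simulated exercise using two standard metrics, precision and recall. The verdicts are invented for illustration.

    # Illustrative only: score an AI detector against a simulated exercise.
    from sklearn.metrics import precision_score, recall_score

    # 1 = attack, 0 = benign, for ten simulated events
    ground_truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    ai_verdicts  = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]   # one missed attack, one false alarm

    # Precision: how many of the AI's alerts were real attacks
    # Recall:    how many of the real attacks the AI actually caught
    print("precision:", precision_score(ground_truth, ai_verdicts))   # 0.75
    print("recall:   ", recall_score(ground_truth, ai_verdicts))      # 0.75

If recall is too low, attacks slip through; if precision is too low, analysts drown in false alarms. Either result is a prompt for human review and retraining.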

Because AI itself is vulnerable to attack, human management and intervention are paramount.  

 

Fuse CS is a Microsoft Partner, and we use Microsoft's cybersecurity tools, including Microsoft Defender, Microsoft Sentinel, and Copilot for Security. We offer a range of scalable cybersecurity services protecting businesses, schools, and non-profit organisations.  

About the author

Fuse

Fuse is a Microsoft Partner, based in Northampton. We help organisations of all sizes to maximise IT efficiencies through the use of Microsoft cloud computing solutions.

Let’s talk. We’d love to hear from you.