CYBERSECURITY

It's time to put our trust in the machine

Businesses must overcome their fear of adopting new technologies if they want to protect themselves from evolving cyber threats, says Rob O'Connor, EMEA CISO at Insight

The relationship between machine learning (ML) and cybersecurity began with a simple yet ambitious idea: harness everything algorithms have to offer to help identify patterns in massive datasets. Before this, traditional threat detection relied heavily on signature-based techniques – essentially digital fingerprints of known threats. These methods, while effective against familiar malware, struggled to protect against zero-day attacks and the increasingly sophisticated tactics of cybercriminals. This created a gap, which led to a surge of interest in using ML to identify anomalies, recognise patterns indicative of malicious behaviour and, ultimately, predict attacks before they could fully unfold.

Some of the earliest successful applications of ML in the space included spam detection and anomaly-based intrusion detection systems (IDS)¹. These early iterations relied heavily on supervised learning, where historical data – both benign and malicious – was fed to algorithms to help them differentiate between the two.
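To make that concrete, here is a minimal sketch of the supervised approach in Python, assuming a toy spam-filter scenario; the features (link count, attachment count, proportion of capitals) and the labelled examples are invented purely for illustration, not drawn from any real detector.

```python
# A minimal sketch of supervised learning for threat detection,
# in the spirit of early spam filters and anomaly-based IDS.
# All features and data below are hypothetical.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-message features: [num_links, num_attachments, pct_caps]
X = [
    [0, 0, 0.02], [1, 0, 0.05], [0, 1, 0.03],    # benign examples
    [9, 2, 0.40], [12, 0, 0.55], [7, 3, 0.38],   # malicious examples
]
y = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = malicious

# Hold out part of the historical data to check generalisation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y
)

clf = RandomForestClassifier(random_state=42)
clf.fit(X_train, y_train)  # learn to separate the two classes

# Score a new, unseen message
print(clf.predict([[10, 1, 0.47]]))  # most likely flagged as malicious
```

The point of the pattern, rather than of this toy, is that the model generalises from labelled history to traffic it has never seen – exactly what signature matching cannot do.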
Over time, ML-powered applications grew in complexity, incorporating unsupervised learning and even reinforcement learning to adapt to the evolving nature of the threats at hand.

Alas, all is not as it seems

In recent years, the conversation has turned to large language models (LLMs) such as GPT-4. These models excel at synthesising large volumes of information, summarising reports and generating natural language content. In the cybersecurity space, they have been used to parse threat intelligence feeds, generate executive summaries and assist with documentation – all tasks that require handling vast amounts of data and presenting it in an understandable form.

As part of this, we've seen the concept of a 'copilot for security' emerge – a tool intended to assist security analysts in the way a coding copilot helps a developer. Ideally, the AI-powered copilot would act as a virtual Security Operations Center (SOC) analyst. It would not only handle vast amounts of data and present it in a comprehensible way but also sift through alerts, contextualise incidents and even propose response actions.

So far, reality has not lived up to the vision. Despite promising utility in specific workflows, LLMs have yet to deliver a transformative, indispensable use case for cybersecurity operations. Why is that? Modern cybersecurity is inherently complex and contextual. SOC analysts operate in a high-pressure environment: they piece together fragmented information, understand the broader implications of a threat and make decisions that require a nuanced understanding of their organisation. Copilots can neither replace the expertise of a seasoned analyst nor effectively address the glaring pain points these analysts face, because they lack the situational awareness and deep understanding needed to make critical security decisions.

Rather than serving as a dependable virtual analyst, these tools have often become a 'solution looking for a problem' – essentially another layer of technology that analysts must understand and manage, without delivering commensurate value. While a tool like Microsoft's Security Copilot shows promise, it has faced challenges in meeting expectations as an effective augmentation to SOC analysts, sometimes delivering contextually shallow suggestions that fail to meet operational demands.

Using AI to overcome AI barriers

Undoubtedly, current implementations of AI are struggling to find their stride. But if businesses are to truly support their SOC analysts, how do we overcome this barrier? The answer could lie in the development of agentic AI: systems capable of taking proactive, independent actions, helping to bridge the gap between automation and autonomy. Its introduction would help AI transition from helpful assistant to integral member of the SOC team.

Agentic AI offers a more promising direction for defensive security, potentially allowing AI-driven entities to actively defend systems, engage in threat hunting and adapt to novel threats without the constant need for human direction. Instead of waiting for an analyst to interpret data or issue commands, agentic AI could act on its own: isolating a compromised endpoint, rerouting network traffic or even engaging in deception techniques to mislead attackers. Such capabilities would mark a significant leap from the largely passive and assistive role that AI currently plays.
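What might that look like in practice? The sketch below shows one possible shape for an agentic response loop; every name in it (Alert, isolate_endpoint, the autonomy threshold) is hypothetical, and a real deployment would sit on top of EDR/SOAR integrations with far richer context. Treat it as an illustration of the decision logic, not an implementation.

```python
# A minimal sketch of an agentic response loop. All names and
# thresholds are hypothetical; real systems would integrate with
# EDR/SOAR platforms and carry much stronger guardrails.
from dataclasses import dataclass

@dataclass
class Alert:
    endpoint: str
    severity: int      # 1 (low) to 10 (critical)
    confidence: float  # model's confidence the activity is malicious

def isolate_endpoint(endpoint: str) -> None:
    # Placeholder for a containment action via an EDR integration
    print(f"[action] isolating {endpoint} from the network")

def open_ticket_for_analyst(alert: Alert) -> None:
    # Placeholder for the human-in-the-loop fallback path
    print(f"[defer] queuing {alert.endpoint} for human review")

def handle(alert: Alert, autonomy_threshold: float = 0.95) -> None:
    """Act autonomously only when severity and confidence are both
    high; otherwise defer to an analyst."""
    if alert.severity >= 8 and alert.confidence >= autonomy_threshold:
        isolate_endpoint(alert.endpoint)
    else:
        open_ticket_for_analyst(alert)

handle(Alert(endpoint="laptop-042", severity=9, confidence=0.98))
handle(Alert(endpoint="exec-laptop-007", severity=9, confidence=0.70))
```

The design choice worth noting is the threshold: it lets an organisation start with everything routed to a human and gradually widen the set of actions the agent may take alone as trust in the system grows.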
Typically, organisations have been slow to adopt any new security technology that can take actions on its own. And who can blame them? False positives are always a risk, and no one wants to cause an outage in production, or stop a senior executive from using their laptop, based on a false assumption.

Attackers don't have this handicap. Without missing a beat, they will use AI to steal, disrupt and extort their chosen targets. The only way for businesses to combat this threat (and relieve overwhelmed SOC teams) is to join the arms race and use agentic AI themselves. Its ability to take proactive, autonomous action will allow organisations to actively engage in threat hunting, defend systems and adapt to novel threats without requiring human involvement.

uk.insight.com

Reference: 1. A comprehensive review of AI based intrusion detection system – ScienceDirect