AI for IT security: moving beyond the hype

By Matt Kraning, CTO at Cortex
Matt Kraning, CTO at Cortex, discusses the importance of artificial intelligence as information security becomes more complex

Today, organisations and cybercriminals are locked in an arms race over artificial intelligence (AI) and machine learning (ML). Both sides actively seek out tools and techniques to further their agendas, which makes it vital for cybersecurity professionals to stay on top of the capabilities of these technologies so that organisations remain protected.

The value of these technologies lies in how they are leveraged as part of a holistic security solution to help protect organisations from cyberthreats. It may be surprising to learn that simply deploying them in your cybersecurity stack will not ensure complete protection, because not all uses of AI and ML are created equal.

Today’s cyberthreat landscape presents many challenges, including the speed at which bad actors conduct their attacks. To address these issues, IT security leaders must look beyond the hype and treat AI and ML as vital components of a comprehensive cybersecurity solution. Ultimately, these technologies should help IT security departments prevent every attack they can, whilst responding quickly to the ones they cannot.

More data equals smarter AI

Today, AI frameworks and models are readily available to end-users, often originating from academic and commercial contexts, and are commonly open-source. However, the quality and quantity of the data fed into these tools will be a key differentiator between your organisation’s use of AI and your competition’s.

AI and ML technologies become more useful when they are fed an abundance of rich data. This directly influences how such systems “learn”: as more real-world data is fed into them, the “smarter” they become, which helps IT security departments obtain better insights.

From an IT security point of view, learning from just one deployment or threat vector is not enough. A system that draws on various sources of information from its users is essential. And while a large pool of environments and users to hone your AI and machine learning systems is helpful, it is equally important to adopt a system that can retain large volumes of data from both, along with a rich mix of data sources.

Folding AI into operational processes

As integral as data is for AI to be constructive, AI and ML technologies themselves also need to be embedded into operational processes. They should not be treated as separate entities, but as complementary technologies that, used together, help to improve security operations.

AI techniques are most effective when they are combined with human insights to create hybrid systems. Integrating large-scale statistical pattern matching from machine learning with domain knowledge enables IT security departments to overcome the challenge posed by newly developed, previously unseen threats, which by definition have little baseline data to cross-reference against. Domain expertise also allows you to develop logic (which large-scale data analysis can also help with) to effectively look out for and guard against attacker toolkits and tactics.
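As a minimal, hypothetical sketch of this hybrid approach (the event fields, the rule about Office spawning PowerShell, and the scoring threshold below are illustrative assumptions, not a description of any vendor’s product), a detector might combine a model’s anomaly score with hand-written analyst logic:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Hypothetical endpoint telemetry, used only for illustration."""
    process_name: str
    parent_process: str
    bytes_out: int

def ml_anomaly_score(event: Event) -> float:
    """Stand-in for a model trained on large volumes of real-world telemetry."""
    # Returns a score in [0, 1]; a real system would call a trained classifier.
    return min(event.bytes_out / 1_000_000, 1.0)

def domain_rule(event: Event) -> bool:
    """Analyst-written logic encoding a known attacker tactic."""
    # Example: Office applications should not normally spawn PowerShell.
    return (event.process_name == "powershell.exe"
            and event.parent_process == "winword.exe")

def hybrid_verdict(event: Event, threshold: float = 0.8) -> str:
    # Domain rules catch known tradecraft even when the model has little
    # baseline data; statistical scoring covers the long tail of anomalies.
    if domain_rule(event):
        return "alert: known attacker tactic"
    if ml_anomaly_score(event) >= threshold:
        return "alert: statistical anomaly"
    return "benign"

print(hybrid_verdict(Event("powershell.exe", "winword.exe", 1_200)))
```

The design point is that the two signals are complementary: the rule fires immediately on a threat that matches known tradecraft, while the statistical score covers behaviours no analyst has yet written a rule for.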

Aggregating insights across different systems can often pose a problem, as it frequently results in unbalanced and skewed error rates across deployments. Solving this problem effectively requires IT security departments to put in place an AI system that incorporates both statistical insights from machine learning as well as domain-specific insights throughout the system, particularly to account for novel attacks. 

Automation playbooks for SOCs

All security operation centre (SOC) teams regularly face increasingly sophisticated threats that need to be addressed. Yet not all of them have enough manpower to do this effectively using manual processes, and IT security professionals are increasingly likely to face a series of problems on an ongoing basis, not just a single one.

Establishing a baseline of normal operations and creating alerts for potential anomalies is a common use for AI and ML tools in IT security. This can then be used to refine operational effectiveness by recognising the tedious and repetitive tasks that workers are constantly carrying out. The technology can then develop or recommend automation playbooks that can save both time and resources.
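As a rough sketch of the baselining idea (the z-score method and the failed-login counts below are illustrative assumptions rather than any specific product’s approach), an anomaly alert can be as simple as flagging a strong deviation from learned normal behaviour:

```python
import statistics

# Hypothetical daily counts of failed logins on one host; a real system would
# baseline far richer telemetry drawn from many deployments and data sources.
history = [12, 9, 14, 11, 10, 13, 12, 11, 15, 10]

def is_anomalous(observation: float, baseline: list[float],
                 z_threshold: float = 3.0) -> bool:
    """Flag observations that deviate strongly from the learned baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z_score = abs(observation - mean) / stdev
    return z_score > z_threshold

print(is_anomalous(60, history))  # True: a sudden burst of failed logins
print(is_anomalous(13, history))  # False: within normal operations
```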

These automation playbooks subsequently allow IT security professionals to prevent multiple risks from growing into security incidents by removing manual processes across security operations. This strengthens the team’s capacity and frees analysts to focus on the work best suited to their experience.
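Conceptually, a playbook is little more than an ordered list of response steps bound to an alert type. The sketch below uses a hypothetical, simplified schema (the playbook name and step names are invented for illustration and do not reflect any specific SOC platform):

```python
# Hypothetical playbook definitions: in a real SOC platform each step would
# call an integration or API rather than simply printing the action taken.
PLAYBOOKS = {
    "phishing_email": [
        "quarantine_message",
        "extract_indicators",
        "block_sender_domain",
        "notify_affected_users",
        "open_ticket_for_analyst_review",
    ],
}

def run_playbook(alert_type: str) -> None:
    """Execute every automated step registered for a given alert type."""
    steps = PLAYBOOKS.get(alert_type, ["escalate_to_analyst"])
    for step in steps:
        print(f"[{alert_type}] running step: {step}")

run_playbook("phishing_email")
```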

Thinking carefully about how AI and ML fit into your organisational IT security strategy is therefore vital. Just as access to large amounts of data combined with machine learning approaches can create a “smarter” and more useful AI system, it is equally important to combine this with non-AI capabilities such as domain knowledge to accurately interpret the data.

AI used strategically with ML can elevate your security management, provided it is woven into the DNA of your operational processes. Ultimately, using AI and ML in an organisation’s security structure allows SOC teams to do more with greater efficacy, yet with fewer people at hand.
