UWE Bristol researchers develop novel defence against adversarial machine learning attacks on Cyber Security Intrusion Detection Systems


As cyber attacks grow in sophistication, Intrusion Detection Systems (IDS) have long been seen as a way to mitigate threats on computer networks.

Yet attackers continue to evade detection and cause disruption through the spread of malicious software and other common attack techniques. Increasingly, attackers are also able to evade machine learning systems themselves, effectively compromising the intended functionality of the machine learning model in order to conduct attacks.

Recent work by Andrew McCarthy, a PhD student at UWE Bristol studying cyber security analytics, has demonstrated both the feasibility of conducting such attacks against Intrusion Detection Systems and a novel approach to combating the vulnerabilities that machine learning classifiers may exhibit.

Whilst adversarial machine learning research has largely focused on computer vision systems, this cutting-edge work applies these concepts to cyber security, to understand what future threats may look like and how best to develop Intrusion Detection Systems that avoid such vulnerabilities.

The results of Andrew’s recent PhD work have just been published in the high-ranking Journal of Information Security and Applications (Elsevier). Andrew is in the final stages of completing his PhD, working with Professor Phil Legg (Director of Studies) and supported by industry partner Techmodal through the UWE Partnership PhD scheme.

The full paper is available online.
