# Enigma 2022: Contextual Security Should Supplement Machine Learning for Malware Detection
Malware continues to be one of the most effective attack vectors in use today, and it is often combated with machine learning-powered security tools in intrusion detection and prevention systems.
According to Nidhi Rastogi, Assistant Professor at the Rochester Institute of Technology, machine learning security tools are not nearly as effective as they could be, as several different limitations often hinder them. Rastogi presented her views on the limitations of machine learning for security and a potential solution known as contextual security at a session on February 2 at the Enigma 2022 Conference.
A key challenge for contemporary machine learning security comes from false alerts. Rastogi explained that false alerts both waste organizations' time and create security gaps that could expose an organization to unnecessary risk.
"It is very difficult to get rid of false positives and false negatives," Rastogi said.
## Why Machine Learning Models Generate False Alerts
Among the primary reasons machine learning models tend to generate false alerts is a lack of sufficient representative data.
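To make the point concrete, here is a minimal sketch (not from Rastogi's talk) of how unrepresentative training data leads to false positives. The toy nearest-centroid "detector," the feature names, and all data values below are hypothetical: the benign training set only contains "quiet" applications, so a busy but legitimate program gets flagged as malware.

```python
# Toy illustration with hypothetical data: a nearest-centroid "malware
# detector" trained only on unrepresentative benign samples misclassifies
# unseen-but-benign behavior, producing a false positive.

def centroid(points):
    """Mean of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, benign_c, malware_c):
    """Label a sample by whichever class centroid is closer."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return "malware" if dist2(sample, malware_c) < dist2(sample, benign_c) else "benign"

# Hypothetical features: (syscalls/sec, network connections/sec).
benign_train = [(5, 1), (6, 2), (4, 1)]          # only "quiet" benign apps were sampled
malware_train = [(50, 40), (60, 45), (55, 38)]

benign_c = centroid(benign_train)
malware_c = centroid(malware_train)

# A legitimate backup tool is busy but benign; nothing like it was in training.
busy_benign_app = (45, 30)
print(classify(busy_benign_app, benign_c, malware_c))  # prints "malware": a false positive
```

The failure here is not in the distance calculation but in the data: because no high-activity benign program appears in the training set, the model has no basis for separating "busy and benign" from "busy and malicious."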
Machine learning, by definition, is an approach where a machine learns how to do