Yaron Kassner, CTO of Silverfort, delves into the pros and cons of transparency when it comes to cybersecurity tools’ algorithms.
Many cybersecurity tools use engines that calculate a risk score for events in customer environments. The accuracy of these risk engines is a major concern for customers, since it determines whether an attack is detected.
Therefore, organizations often request visibility into how a risk engine actually works. Let’s consider whether disclosing a security product’s algorithm is the best approach.
The Pros of Visibility into a Risk Engine
On the one hand, providing visibility into a risk engine enables an organization to know exactly what it is buying and to test the capabilities in a proof of concept (PoC). It also provides the buyer with a sense of control. Some vendors allow customers to modify the parameters of their risk algorithm in order to fine-tune results based on their specific needs.
But while this approach allows far greater customization, only a small number of companies have the resources and domain expertise required to make modifications that reliably distinguish normal behavior from an attack.
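To make the trade-off concrete, here is a minimal sketch of the kind of tunable risk engine described above. The indicator names, weights, and threshold are all illustrative assumptions, not any vendor's actual algorithm; the point is that exposing these parameters gives the customer control, but choosing good values requires real domain expertise.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEngine:
    """Toy risk engine with customer-tunable parameters (illustrative only)."""
    # Weight per risk indicator; names and values are hypothetical.
    weights: dict = field(default_factory=lambda: {
        "new_device": 0.4,
        "impossible_travel": 0.8,
        "off_hours_login": 0.2,
    })
    threshold: float = 0.7  # events scoring at or above this are flagged

    def score(self, indicators):
        # Sum the weights of observed indicators, capped at 1.0.
        raw = sum(self.weights.get(i, 0.0) for i in indicators)
        return min(raw, 1.0)

    def is_risky(self, indicators):
        return self.score(indicators) >= self.threshold

engine = RiskEngine()
event = ["new_device", "off_hours_login"]
print(engine.score(event))        # below the 0.7 threshold: not flagged
print(engine.is_risky(event))

# A customer "fine-tunes" one weight -- the same event is now flagged.
engine.weights["off_hours_login"] = 0.5
print(engine.is_risky(event))
```

Note how a single weight change flips the verdict on the same event: that is exactly the power, and the danger, of letting customers modify the algorithm.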
In addition, understanding the risk algorithm enables customers to distinguish between bugs and algorithm