Autonomous Decisions - Turning the Tables in the Defenders' Favor

We live in the era of information, and this is both our greatest advantage and our trickiest drawback. We’ve been granted unprecedented access to a vast amount of knowledge. People can learn, research, and stay informed more easily than ever before. Information is shared quickly and globally, fostering communication and collaboration across countries, systems, and communities. In any field, be it cybersecurity, finance, or software development, the availability of data fuels innovation and creativity, leading to huge advancements every day.

But there are also downsides. The sheer volume of information can be overwhelming, leading to information overload, and sorting out credible information from the rest becomes a challenge in itself.

Does this impact the cybersecurity world?

It becomes harder and harder to distinguish useful data from noise, and for every milestone we reach in securing the digital space, there are attackers using the same technological innovations to improve their tactics.

That’s where security analysts come in, on their quest to handle not only ever-evolving threats, but also all the noise generated around them by this information overload. But they’re only human, and there are limits to how much a human can process in a day before reaching fatigue and oversaturation. On top of the sheer number of events that need to be investigated, the relevant data is scattered like puzzle pieces they have to assemble. Without the full context at hand, their decisions can’t always be fully accurate. Educated guesses and educated decisions are very different concepts in cybersecurity, and the uncertainty the former implies is unacceptable in this field.

A study by Cybersecurity Ventures shows that in 2023 a cybersecurity attack took place every 39 seconds, up from one every 44 seconds in 2022. This translates to roughly 2,200 cases a day for an average SOC team. Analysts are unable to deal with the majority of these daily alerts, and more than half of them turn out to be false positives.
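The per-day figure follows directly from the attack frequency; a quick back-of-the-envelope check, assuming a 24-hour day:

```python
# Back-of-the-envelope check of the alert-volume figures above.
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400

attacks_per_day_2023 = SECONDS_PER_DAY / 39   # one attack every 39 seconds
attacks_per_day_2022 = SECONDS_PER_DAY / 44   # one attack every 44 seconds

print(round(attacks_per_day_2023))  # ~2215, i.e. roughly 2,200 a day
print(round(attacks_per_day_2022))  # ~1964
```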

This alone is reason enough to direct our efforts into researching methods that give SOC teams their time back, and to counter attacks by embedding cyber experts’ knowledge into technology that can fight back. The goal has to be to rebalance the scales in the defenders’ favor and give them an advantage over attackers.

Assisting versus deciding

Security experts need sufficient data to draw conclusions when investigating events and mitigating risk. But their day-to-day job already means switching from one tool to another, following the tracks of a potential threat from system to system and screen to screen, until they can finally decide whether they’ve been chasing malicious actions or merely false positives.

Having an assistant could mean anything from getting an initial assessment of the event’s context to having all the data gathered in one place. It could guide analysts through the investigation steps, interacting with them and running queries on their behalf, or pulling information from different sources at every step, upon request. This can help less experienced analysts, and even advanced ones, get through the noise faster. And yet, it doesn’t solve their fundamental problem: they still have to deal with every single event and go through all the details of every individual investigation.

By far the most helpful, though, is a system that gathers data from all the sources and presents it to the analyst all at once. While it may sound easy, being able to distinguish which source is relevant, which actions are related to one another, and which way to go from one investigation step to the next is no trivial task.

Then, if such a system manages to distinguish useful data from noise and gather the relevant contextual information, why not teach it to make decisions?

Our take on the problem

A problem that you’ve identified and defined is a problem half-solved. Our system starts at the beginning: it reads an event, defines the potential problem, and then enriches it with all the relevant data. Having all the data gathered in one place means you have everything you need to reach an educated conclusion.
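The read-then-enrich step described above can be sketched as a simple pipeline. Everything here is illustrative: the event fields, the source names, and the lookup functions are hypothetical placeholders, not a real API.

```python
# Illustrative sketch of an event-enrichment pipeline: read a raw event,
# then pull related context from several (hypothetical) data sources so
# the decision step sees one consolidated picture instead of fragments.

def enrich(event, sources):
    """Attach context from every registered source to the raw event."""
    context = {}
    for name, lookup in sources.items():
        # Each lookup receives the raw event and returns whatever
        # context it can contribute (possibly nothing).
        context[name] = lookup(event)
    return {"event": event, "context": context}

# Hypothetical sources; in practice these would query real systems.
sources = {
    "dns_logs": lambda e: {"recent_domains": ["example.test"]},
    "endpoint": lambda e: {"process_tree": ["explorer.exe", "powershell.exe"]},
    "threat_intel": lambda e: {"ioc_match": False},
}

alert = {"id": "evt-001", "type": "suspicious_login", "host": "ws-42"}
enriched = enrich(alert, sources)
print(sorted(enriched["context"]))  # ['dns_logs', 'endpoint', 'threat_intel']
```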

The AI models behind it are trained on this data and, through a patented continuous-feedback-loop approach, reach a state where their decisions are as informed as those of all your analysts combined. Think of it as a star member of your team that knows your environment, and only yours, and stays in it. The data, the know-how, the expertise, and the models all stay inside the company and are fully replicable, directly addressing talent shortage and fatigue.

Yes, this is proof that you can make decisions that mimic expert-level human behavior, and turning this into autonomous decision-making is just one step away.

What autonomous really means

In the broader sense, autonomous decision-making means being able to choose without being influenced by external factors, and to have your decisions align with the knowledge you’ve acquired. In Decision Intelligence specifically, it means developing AI techniques that reach conclusions without direct human intervention.

This doesn’t mean we’re taking humans out of the game. 

An AI system can’t know when its decisions are wrong; if it could, it would correct them and self-improve, and thus always be right. AI systems are modeled on human actions, and humans sometimes make wrong choices.

In scientific terms, an artificial system that can surpass human knowledge, self-adjust, and become aware or conscious is called Artificial General Intelligence, and to this day no human-made system has been proven to reach that level.

Know when you don’t know

The only realistic conclusion to draw is that yes, AI can also make wrong choices. However, our take on autonomy relies on the concept of knowing when you don’t know. We see autonomous decision-making as a process through which AI models reach human expert level, explain the path by which they reached a decision, and, crucially, ask for human intervention when the uncertainty level is high.
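The "know when you don't know" principle boils down to gating decisions on the model's confidence: act autonomously when confidence is high, escalate to a human when it isn't. A minimal sketch, where the threshold value and the verdict labels are arbitrary choices for illustration:

```python
# Minimal sketch of confidence-gated autonomy: the system decides on
# its own only when its confidence clears a threshold; otherwise it
# flags the case for human review instead of guessing.

CONFIDENCE_THRESHOLD = 0.90   # arbitrary illustrative cut-off

def decide(verdict, confidence, threshold=CONFIDENCE_THRESHOLD):
    """Return an autonomous decision, or escalate when uncertain."""
    if confidence >= threshold:
        return {"action": verdict, "escalated": False}
    # Below the threshold: the system admits it doesn't know and
    # asks a human analyst to weigh in.
    return {"action": "ask_human", "escalated": True}

print(decide("close_as_benign", 0.97))  # handled autonomously
print(decide("open_ticket", 0.55))      # escalated to an analyst
```

The analyst's answer on an escalated case can then be fed back as a label, which is how a feedback loop of this kind would let the system improve over time.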

An autonomous system can’t be perfect today, and the limitations are clear. There’s always going to be an exception, a disruption in the environment, a case that even all your experts, gathered around the same table, would misjudge. An autonomous system should be able to evolve based on feedback and flag the situations where it doesn’t know what to do.

What if?

What if your analysts only needed to handle those disruptive situations, such as sudden security policy changes inside your organization, that an AI system can’t know about without being informed, while your autonomous decision-making system did everything else? What if autonomy really meant managing all the tedious work, the 99.99% of events, consistently and without fail, with your experts intervening only when the autonomous system asks for their input?

We strongly believe that this is the solution to tip the scales back in the defenders’ favor. Time is what defenders need - and we can’t give them time back if we don’t start acting like them, in an end-to-end autonomous process. Working with them and handling the vast majority of the investigations they need to perform in the same way they do - starting off with the event description, building context around it, reaching conclusions, opening tickets, or even acting upon the conclusions - this is the real game changer. 

Merely making analysts faster at investigating doesn’t scale: there will always be more and more events, and in a month’s or a year’s time they will be overwhelmed again by the ever-growing pool of events.

Tables need to turn, and analysts need to stand a chance against all the data surrounding them. On our journey towards making this a reality, autonomous decisions need to happen, and they need to be tailored to the environment in which they’re made. We’re not proposing a pre-trained set of solutions to the diversity of questions that need answering in the cyber world; we propose solutions that develop and learn from the same starting point, human knowledge, evolving into autonomous decision-making agents.