
Why the emergence of agentic AI makes a secure baseline even more crucial
The AI revolution
For a few years now, the impressive and heavily funded progress made in AI has disrupted most domains of activity, from administrative work to education, from IT development to fundamental research… This article itself could even have been written by an AI (it’s not — I’m deeply resisting that, as I strongly feel the need to make my own brain work at least a little).
Until a few months ago, what we had witnessed was mostly the result of using LLMs (Large Language Models) in isolation, as conversational partners able to generate text, images, videos, sounds… but without true autonomy or the ability to act on other systems. Even though the power of those models, trained on billions of pieces of knowledge, was already truly impressive, their output was generally confined to a conversation window, where the answer to a prompt was returned to the human, who then needed to copy/paste or export it somewhere else to use it in real life.
This is no longer the case with the emergence and widespread adoption of agentic AI, capable of connecting to other systems, acting on behalf of humans, and performing sequences of tasks and reasoning at unprecedented speed and scale. Agents now both “think” (even if we could argue about that verb) and act: at high speed, with virtually unlimited knowledge, and with the systematic approach we can expect from a computer-based system.
A predictable impact
This is truly a game changer for cybersecurity. Until now, automated and systematic testing and probing already existed in the form of vulnerability scanning, fingerprinting, port scanning, etc. However, leveraging that information to move to the next step of an attack — typically exploiting a vulnerability and initiating lateral movement — usually required a significant amount of human effort. Furthermore, that next step often involved trial and error, which was generally detectable and slow enough to be blocked by defense teams or systems.
But we now have the possibility of running digital agents, powered by AI, trained on huge vulnerability databases and a vast body of cybersecurity knowledge, and able to perform every step of an attack autonomously.
In cybersecurity, there is always a balance between attackers and defenders in an endless race. When a new type of attack emerges, a new type of defense usually follows. Agentic AI used for hacking purposes is not a new type of attack per se, but it changes the speed, effort, and level of knowledge required to perform attacks on a target. It doesn’t take much for an attacker to assign a target to an agentic AI and let it operate autonomously until it reaches the crown jewels. With a virtually infinite knowledge base, the agent makes fewer errors, acts quickly, and can systematically test all relevant possibilities to achieve its goal. Meanwhile, the human behind the attack is free to focus on other tasks. Moreover, advanced hacking knowledge may no longer be necessary, as the complex technical details are delegated to the AI agent.
What does this change from a company’s standpoint? In practice, it’s as if we have evolved from defending ourselves against a group of slow “amateurs” to facing an army of fast specialists, leaving much less room for our own mistakes or omissions.
Where are we now?
Let’s be honest: until now, luck and timing have often been on the defenders’ side. How many times has an ethical hacker or penetration tester demonstrated the ability to successfully breach, steal from, or infect a company’s IT system? Reported success rates commonly exceed 60%. This also means that a malicious actor could have done the same — and if it hasn’t happened yet, it’s most likely because the company was lucky, or because the cost of launching an attack (in terms of time, effort, and legal risk) did not justify the potential benefits.
Having “free”, smart, autonomous AI agents drastically changes the situation. Attacking becomes easier and far less expensive, and it only takes one defensive error to lose the game. As a result, many more companies become potential targets.
In this context, it becomes crucial for companies to adopt a systematic and exhaustive approach to security. “Defense in depth” is no longer sufficient; what is now required is “defense in breadth”.
Here comes CogTL!
This is where our CogTL platform plays a much-needed role today. As attackers become smarter and capable of systematically testing all defenses, the first priority is to ensure they won’t find an easy weak spot — such as an unpatched vulnerability, a missing control on a newly deployed service or machine, or a lack of visibility in server logs. No hacker, whether AI-powered or old-fashioned, will start with highly sophisticated attack techniques. They will nearly always go for the low-hanging fruit if it is available.
These basic issues are easy to address with our platform. Ensuring complete coverage of existing controls day after day, preventing exceptions from persisting indefinitely, and avoiding the accumulation of small issues on the same assets — these are the core objectives of CogTL. Achieving them becomes straightforward, yet has a significant impact on risk reduction. This is what Cognitechs aims to deliver: a continuously healthy security baseline for companies.