Safety Company News

The cybersecurity industry has spent three decades building tools for a world that no longer exists. Firewalls, intrusion detection systems, endpoint protection platforms, vulnerability scanners — the entire apparatus of modern security was designed for a particular kind of infrastructure: static servers running known software, connected through well-defined network boundaries, attacked by humans operating on human timescales. That infrastructure is disappearing. What is replacing it does not resemble it in any meaningful way.

The systems that organizations depend on today are distributed, ephemeral, and increasingly autonomous. Containers spin up and die in seconds. Microservices communicate across dozens of internal APIs that change with every deployment. Infrastructure is defined in code, provisioned automatically, and scaled without human involvement. The attack surface is not a perimeter — it is the entire dependency graph, every configuration file, every service account permission, every API endpoint exposed by every service in every environment.

And yet the tools defending this infrastructure still operate on the same fundamental principle they always have: pattern matching. They maintain databases of known vulnerabilities. They compare observed behavior against signatures of known attacks. They flag deviations from predefined rules. When a new vulnerability is discovered, it gets a CVE number, a severity score, and a signature that security tools can match against. The entire system depends on the assumption that threats can be catalogued and recognized.
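The mechanism, and its limitation, fit in a few lines. The sketch below is deliberately minimal, with invented signatures and payloads rather than any real product's detection engine, but it shows the essential property: a catalogued pattern is caught, while a trivial rewrite of the same attack passes through untouched.

```python
# Minimal sketch of signature-based detection. Signatures and payloads are
# invented examples; real engines are far more elaborate, but the principle
# is the same: only what is already catalogued can be recognized.
KNOWN_SIGNATURES = {
    "sql_injection_basic": "' OR '1'='1",
    "path_traversal_basic": "../../etc/passwd",
}

def match_signatures(payload: str) -> list[str]:
    """Return the names of known signatures found in the payload."""
    return [name for name, pattern in KNOWN_SIGNATURES.items()
            if pattern in payload]

# A catalogued attack is flagged...
assert match_signatures("user=' OR '1'='1") == ["sql_injection_basic"]
# ...but a trivially rewritten variant of the same attack is not.
assert match_signatures("user=' OR 2>1 --") == []
```

No amount of sophistication in the matcher changes this asymmetry; only the catalogue grows.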

That assumption was always imperfect. Now it is becoming untenable.

The gap

The problem is not that signature-based security tools are poorly built. Many of them are excellent at what they do. The problem is that the nature of the threat has changed, and pattern matching — no matter how sophisticated — cannot keep pace.

Consider how a modern vulnerability actually works. A developer adds a dependency to a project. That dependency has its own dependencies, which have their own dependencies. Somewhere in that chain, a library makes an assumption about how input data is structured. That assumption holds in most contexts but not all of them. In the specific configuration of this particular service, running in this particular environment, with this particular set of permissions, the assumption fails in a way that allows an attacker to escalate privileges. The vulnerability is not in the code — it is in the interaction between the code, the configuration, the environment, and the permissions. No signature can capture that, because it is not a pattern. It is a consequence of how a specific system is assembled.

This is where the industry's approach breaks down. A vulnerability scanner can tell you that a library has a known CVE. It cannot tell you whether that CVE is actually exploitable in your specific environment. It cannot reason about the chain of conditions that would need to be true for an attacker to leverage it. It cannot understand that a medium-severity vulnerability in one context becomes critical when combined with a particular IAM configuration and a specific network topology.

The result is a paradox that every security team knows intimately: they are simultaneously overwhelmed with alerts and underinformed about actual risk. The scanner flags hundreds of vulnerabilities. The team triages them by severity score. But severity scores are generic — they describe the theoretical maximum impact of a vulnerability, not its actual exploitability in a specific environment. So the team spends its time investigating vulnerabilities that turn out to be irrelevant, while the ones that matter are buried in the noise.

What Claeth is

Claeth is an autonomous cybersecurity system that we are building to address this gap. Where traditional tools match patterns, Claeth reasons about systems. Where scanners report what is known to be vulnerable, Claeth attempts to understand how a specific system can actually fail.

The core idea is that security analysis should work the way a skilled human analyst works — but at the speed and scale that modern infrastructure requires. A skilled analyst does not simply look up CVE numbers. They read the code. They trace the data flow. They consider the deployment environment, the permissions model, the network topology. They think about what an attacker would actually need to do, step by step, to exploit a weakness. They build a mental model of the system and then reason about how that model can be broken.

This is what we are training Claeth to do. Not to match patterns against a database, but to construct a contextual understanding of a system and then reason about its failure modes. The distinction matters enormously. Pattern matching is reactive — it can only find what it already knows about. Contextual reasoning is generative — it can identify vulnerabilities that have never been seen before, because it understands the principles by which systems fail, not just the specific instances that have been catalogued.

The technical approach

Building a system that reasons about security requires solving several interconnected problems.

The first is representation. Before Claeth can reason about a system, it needs a model of that system — not a static inventory of assets, but a dynamic graph that captures how components interact, what data flows between them, what permissions govern access, and how the system's behavior changes under different conditions. We are developing methods to construct these representations automatically from code, configuration, and runtime telemetry, and to keep them current as the system evolves.

The second is vulnerability reasoning. Given a representation of a system, Claeth needs to identify the conditions under which it can fail. This is fundamentally different from looking up known vulnerabilities. It requires understanding categories of failure — how authentication can be bypassed, how input validation can be circumvented, how privilege escalation works in practice — and then applying that understanding to the specific architecture of the system being analyzed. We are training models on large corpora of vulnerability data, security research, and exploit analysis, with the goal of developing general reasoning capabilities about how systems break.

The third is contextual prioritization. Not every theoretical vulnerability is a practical risk. Claeth needs to assess exploitability in context — considering the actual deployment environment, the network configuration, the permissions model, the availability of attack prerequisites. A vulnerability that requires local access is not the same risk in an internet-facing service as it is in an internal tool running behind a VPN. We are building models that can make these distinctions automatically, producing risk assessments that reflect the actual threat landscape of a specific system rather than generic severity scores.

The fourth is continuous monitoring. Infrastructure is not static, and neither is the threat landscape. Code changes with every deployment. Dependencies are updated. Configurations drift. New attack techniques are published. Claeth is designed to operate continuously — re-evaluating its analysis as the system changes, and surfacing new risks as they emerge rather than waiting for a scheduled scan. The challenge here is computational: continuously reasoning about complex systems at the depth required for meaningful security analysis is expensive, and we are working on methods to make it tractable at scale.

What this is not

It is worth being explicit about what Claeth is not, because the cybersecurity industry has a long history of overclaiming.

Claeth is not a replacement for all existing security tools. Signature-based detection still has value for known threats. Firewalls still matter. Access controls, encryption, secure development practices — none of these become unnecessary because an AI system is analyzing your infrastructure. Claeth is designed to address a specific gap: the inability of current tools to reason contextually about complex, dynamic systems. It is additive, not substitutive.

Claeth is also not infallible. AI systems make mistakes. They hallucinate. They miss things. They sometimes confidently identify risks that do not exist, and sometimes fail to identify risks that do. We are working to minimize these failure modes, but eliminating them entirely is not a realistic goal. The question is whether Claeth's contextual analysis provides meaningful value above what teams can achieve with existing tools and limited human resources — and we believe the answer is yes, even with imperfect accuracy.

Finally, Claeth is not a finished product. It is a research effort working toward a production system. The problems we are trying to solve — contextual vulnerability reasoning, automated system modeling, continuous security analysis — are genuinely hard. We are making progress, but we are not going to pretend that progress is completion.

Why this matters now

The urgency is not hypothetical. The infrastructure that organizations are building today is more complex, more interconnected, and more autonomous than anything that has come before. AI systems are being deployed to manage critical processes. Automated pipelines handle sensitive data without human oversight. The blast radius of a security failure in these systems is not a data breach — it is a loss of control over autonomous processes that affect real outcomes in the physical world.

The security tools that protected the previous generation of infrastructure cannot protect this one. They were designed for a world where systems changed slowly, threats were catalogued, and humans had time to investigate alerts. None of those assumptions hold anymore. The industry needs security systems that can reason about complexity at machine speed — that can understand not just what a vulnerability is, but what it means in the context of a specific, living system.

That is what we are building.

Current status

Claeth is in active development. Our security research team is focused on the core technical challenges: building accurate system representations from heterogeneous data sources, training models that can reason about vulnerability patterns in context, and developing the continuous analysis pipeline that will allow Claeth to operate in real time against production infrastructure.

We are not accepting users, and we have not set a public timeline for availability. The problems we are working on do not have shortcuts, and we would rather take the time to build something that works than rush to market with something that does not. We will share our research as it matures, and we will announce availability when the system meets our internal standards for reliability and accuracy.

If you are working on related problems in infrastructure security, or if you are a security team dealing with the challenges described here, we are interested in hearing from you — not as potential customers, but as practitioners whose experience can inform what we build.

Get in touch