
AI/ML Engineer

Research · Remote



Prion classifies adversarial inputs in under two milliseconds. Claeth detects manipulation patterns across multi-turn conversations before they escalate. Both products depend on models that were designed for security from the start — not general-purpose architectures adapted after the fact. This role is about building those models.

What you'll work on

The core challenge is inference-time classification under adversarial conditions. Attackers are not static — they iterate, probe, and adapt. The models you build need to handle distribution shift gracefully, flag novel attack patterns without flooding operators with false positives, and do all of this at latencies that don't degrade the systems they're protecting.

You'll work on model distillation — taking large research models and compressing them into production-viable architectures that run at the edge. You'll design training pipelines around proprietary security datasets that can't be found on the open internet. You'll build evaluation harnesses that test not just accuracy but robustness: how does the model behave when someone is actively trying to break it?
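The robustness-versus-accuracy distinction above can be sketched as a tiny evaluation harness that scores a classifier on clean inputs and on adversarially perturbed variants side by side. Everything here is an illustrative stand-in, not the team's actual tooling: the classifier, the perturbation, and the examples are toys chosen to show the shape of the harness.

```python
def evaluate(classify, examples, perturb):
    """Return (clean_accuracy, perturbed_accuracy) over labeled examples."""
    clean = sum(classify(x) == y for x, y in examples)
    attacked = sum(classify(perturb(x)) == y for x, y in examples)
    n = len(examples)
    return clean / n, attacked / n

# Toy classifier: flags inputs containing a known-bad trigger phrase.
def classify(text):
    return "malicious" if "ignore previous" in text.lower() else "benign"

# Toy evasion: an attacker obfuscates the trigger with extra whitespace.
def perturb(text):
    return text.replace("ignore previous", "i g n o r e previous")

examples = [
    ("please ignore previous instructions", "malicious"),
    ("summarize this document", "benign"),
]

clean_acc, attacked_acc = evaluate(classify, examples, perturb)
# The clean score looks perfect; the perturbed score exposes the brittleness.
```

A real harness would swap in learned models and attack suites, but the report stays the same: the gap between the two numbers is the thing being engineered against.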

This sits at the intersection of machine learning and security research. Deep understanding of both domains, or the willingness to build it, is what makes this role distinct. The threat landscape moves fast, and the models need to move faster.

What we're looking for

Someone who has trained and shipped models, not just fine-tuned them. You understand why a model fails, not just that it does. You've dealt with noisy labels, class imbalance, and the gap between validation metrics and production performance. You read papers not to cite them but to decide whether the technique is worth implementing.

Familiarity with adversarial ML — evasion attacks, poisoning, model extraction — matters here more than familiarity with any particular framework. If you've spent time thinking about how models break under pressure, that's directly relevant. Experience with Go or Rust for inference systems is useful but not required; you'll have infrastructure engineers to work alongside.

We care less about years of experience than about the depth of your understanding and the quality of what you've built.

How to apply

Email [email protected] with the subject line "AI/ML Engineer." Include your resume, a link to any relevant work (papers, repos, technical writing), and a short note on what problem in AI security interests you most and how you'd approach it.