
Senior Application Security Engineer
Automate your job search with Sonara.
Submit 10x as many applications with less effort than one manual application.1
Reclaim your time by letting our AI handle the grunt work of job searching.
We continuously scan millions of openings to find your top matches.

Job Description
As a Senior Security Engineer within Platform Security at Datadog, you will play a vital role in securing our infrastructure for agentic applications. This role will be critical in establishing and enforcing robust security controls to mitigate risks associated with LLMs such as prompt injection, hallucinations etc. that can lead to unintended executions.
The ideal candidate will have a deep understanding of LLM threats and practical experience in network segmentation, implementing strict data access controls, and defining runtime hardening measures. You'll collaborate closely with platform teams to implement secure-by-default controls across widely used platforms, with a specific focus on multi-agent systems.
You will be at the forefront of ensuring the security of Datadog's AI platform and products by establishing standards, developing secure solutions, performing threat modeling, and remediating AI-specific vulnerabilities.
Responsibilities
- Design and implement solutions to harden agentic application infrastructure, reducing the impact of unintended execution.
- Enforce strict data access controls following least privilege principles to reduce the risk of unauthorized access.
- Implement and monitor robust runtime hardening measures to constrain and monitor agent actions, preventing unintended code execution, resource exhaustion, privilege escalation, and bypass of security controls.
- Define and implement security policies for agents and tools, outlining hardening requirements, data permissions, and service access.
- Work closely with platform teams to integrate security best practices throughout the multi-agent system's lifecycle.
- Perform threat modeling with engineering teams for new and existing AI products, focusing on emerging AI-specific threats.
- Prioritize and remediate LLM threats, such as prompt injection by developing and maintaining continuous testing frameworks for prompt injection vulnerabilities.
Who We're Looking For:
- 5+ years experience in software engineering or development within a collaborative setting, with a preference for Go and Python experience, and familiarity with LLM frameworks and protocols (A2A, MCP).
- Proven experience in security and/or infrastructure engineering, with a focus on distributed systems or agent-based architectures.
- Proven experience in implementing security controls within application infrastructure including zero-trust networking, runtime hardening and workload protection solutions.
- Familiar with common vulnerabilities, and mitigation techniques, particularly concerning LLM applications (OWASP Top 10 for LLM applications).
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Track record of successfully driving security initiatives with leadership and engineering buy-in.
- Stays current with the latest security best practices, technologies, and emerging threats, especially in the generative AI space.
Automate your job search with Sonara.
Submit 10x as many applications with less effort than one manual application.
