About The Position

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

NVIDIA is looking to hire a deeply technical, hands-on Principal Engineer to lead the security foundations for autonomous, self-evolving agents across the enterprise. This engineer is expected to be familiar with agentic AI concepts, sandboxed execution environments, and the security and safety layers required when agents generate and execute code while accessing internal and external data sources.

You’ll partner closely with the Cloud, AI/ML & Generative AI workforce, internal platform teams already building sandboxed environments for LLM-generated code execution, and cross-functional stakeholders including Legal, Security, and Agent Identity teams. Working in a multifaceted and agile environment, you will extend that foundation into a robust safety and security program for long-running, self-improving autonomous agents that refine their own behavior over time, with guardrails enforced at both build time and run time, deep observability and auditing, and continuous evaluation, unblocking teams and setting NVIDIA up for long-term success.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent experience).
  • 15+ years of industry experience building and securing large-scale systems, platforms, or infrastructure.
  • Proven ability to lead complex technical initiatives as a senior IC—setting direction, driving alignment, and delivering outcomes.
  • Strong understanding of security fundamentals: threat modeling, authentication/authorization, least privilege, secrets management, secure SDLC, and incident response.
  • Demonstrated experience with sandboxing / isolation technologies (containers, microVMs, Linux security primitives, policy enforcement, runtime controls).
  • Experience designing systems with strong observability and auditability (structured logs, traceability, metrics, security telemetry).
  • Familiarity with evaluation and benchmarking approaches for AI/ML systems, including designing tests, measuring behavioral drift, and maintaining safety invariants over time.
  • Solid programming and systems skills (e.g., Python, Go, or similar), and comfort working across stack boundaries when needed.
  • Ability to operate effectively in a fast-paced, multifaceted environment, with a bias toward action and delivery.

Nice To Haves

  • Experience securing agentic AI systems or LLM applications that use tools, execute code, or take autonomous actions, especially self-evolving agents that modify their own prompts, tools, or workflows.
  • Hands-on experience with technologies like Kubernetes, containers, workload isolation, policy engines, and runtime security.
  • Familiarity with enterprise developer workflows: CI/CD, artifact integrity, dependency/supply-chain security, and secure build pipelines.
  • Experience designing governance frameworks for emerging technologies—risk tiering, guardrails, rollout playbooks, and adoption enablement.
  • Background in continuous evaluation pipelines for AI systems, including automated red-teaming, regression testing, or safety benchmarking at scale.
  • Strong intuition for balancing developer productivity with security and compliance, and the ability to build solutions developers actually want to use.

Responsibilities

  • Lead the end-to-end technical strategy and execution for securing autonomous agents across the enterprise, with a strong bias for enabling developer velocity.
  • Define agent security and safety requirements and translate them into scalable architectures, guardrails, and platform capabilities.
  • Extend existing sandbox foundations for LLM-generated code execution to support autonomous, tool-using agents and multi-step workflows.
  • Design and implement strong isolation, policy enforcement, and least-privilege access controls for agent runtimes and tool integrations.
  • Define and enforce build-time guardrails (policy gates, secure defaults, capability declarations) and run-time guardrails (behavioral boundaries, action allowlists, kill switches) that constrain what self-evolving agents can do as they adapt.
  • Build secure pathways for agents to access internal and external data sources, including secrets handling, data protection, and governance controls.
  • Establish comprehensive observability and auditing infrastructure (structured logs, decision traces, drift detection, and security telemetry) to ensure agent actions are traceable, measurable, and operationally safe at scale.
  • Design and operate a continuous evaluation framework that benchmarks agent behavior, detects capability drift, and validates that self-improving agents remain within approved safety and security envelopes.
  • Build a streamlined, developer-friendly experience to run autonomous agents securely—enabling easy onboarding and day-to-day use across both closed-source and open-source agents (e.g., Claude Code, Codex, OpenCode, Openclaw/Claws) with consistent guardrails, policies, and controls.
  • Drive cross-functional alignment and delivery with Cloud, AI/ML & Generative AI workforce, Legal, Security, Agent Identity, and internal platform teams.
  • Stay ahead of emerging agent threats and failure modes (particularly risks unique to self-evolving agents), and continuously evolve defenses, standards, and best practices for agent safety and security.