About The Position

The Senior AI Security Engineer will help establish and strengthen the security foundation for Cargill’s modern business applications by securing and operating a scalable, enterprise AI platform. In this role, you will apply deep expertise in cybersecurity, cloud platforms, and AI engineering to protect AI capabilities used by data and application teams to deliver measurable business value. You will play a key role in enabling safe adoption of Generative AI, Large Language Models (LLMs), and Agentic AI systems, while ensuring security, resilience, and compliance are built in by design. The role also includes coaching and mentoring junior engineers, driving automation-first security practices, and delivering highly scalable AI security solutions across the enterprise.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Cybersecurity, Artificial Intelligence / Machine Learning, or a related field.
  • 6+ years of experience in cybersecurity engineering, with demonstrated ownership of complex, enterprise-scale systems.
  • Proven experience addressing the unique security challenges of Large Language Models (LLMs), Generative AI, or Agentic AI systems, including model misuse, prompt injection, hallucination risk, and data governance.
  • Strong understanding of cloud security concepts across AWS, Azure, and/or GCP, including IAM, networking, encryption, logging, and monitoring.
  • Hands-on experience with Python and security automation for scalable enforcement of AI security controls.
  • Experience securing containerized and distributed systems, including Kubernetes and service meshes.
  • Familiarity with AI platform components such as model registries, vector databases, RAG pipelines, fine-tuning workflows, and inference gateways.

Responsibilities

  • Develop and maintain security and resilience for enterprise-grade AI and Generative AI services, supporting business-critical use cases at scale.
  • Secure Large Language Models (LLMs), Agentic AI systems, and multi-agent workflows, including inference services, fine-tuning pipelines, and runtime execution environments.
  • Implement security controls for Model Context Protocol (MCP) and similar agent integration patterns, protecting tool invocation, context exchange, agent-to-agent communication, and external data access.
  • Develop and manage AI platform security capabilities across public cloud AI services, self-hosted models, and third-party AI providers.
  • Build automated security guardrails using Python and policy-as-code to enforce access control, data protection, model usage limits, and runtime monitoring.
  • Partner closely with AI Platform, Cloud Platform, DevOps, Software Engineering, and Compliance teams to embed security into the AI development lifecycle (AISDLC).
  • Perform threat modeling and risk assessments for AI architectures, including RAG pipelines, vector databases, agent frameworks, and model supply chains.
  • Support AI-related incident response, including investigation, containment, remediation, and lessons learned for security incidents involving AI systems or agents.
  • Contribute to enterprise AI security standards, reference architectures, and best practices.
  • Act as a senior technical voice, mentoring junior engineers and influencing stakeholders across engineering and leadership teams.