About The Position

We are seeking a Senior DevOps Engineer to design, deploy, and operate the next generation of Inflection AI’s cloud and AI infrastructure. This role sits at the intersection of AI research and production systems, owning the reliability, scalability, and performance of GPU-enabled platforms that power large-scale LLM training and inference. You will work across Azure and AWS to build highly automated, observable, and resilient infrastructure supporting low-latency AI applications in production.

Requirements

  • 5+ years of hands-on experience in DevOps, Site Reliability Engineering, or ML Infrastructure supporting high-scale, production systems.
  • Deep expertise in Azure and AWS, including storage, compute, networking, databases, and cloud-native monitoring services.
  • Strong Kubernetes administration experience, including GPU scheduling, operator deployment, and management of core infrastructure components; experience with Slurm is highly desirable.
  • Proven experience deploying, scaling, and operating Large Language Models (LLMs) and inference engines such as vLLM, TGI, or Triton; a minimal client sketch follows this list.
  • Strong experience with modern DevOps tooling: Terraform, Helm, Kustomize, ArgoCD, GitHub Actions or GitLab CI, Prometheus, Grafana, and ClickHouse.
  • Advanced scripting and automation skills in Python and Bash, with the ability to debug complex distributed systems and optimize performance at scale.
  • Demonstrated ability to troubleshoot LLM servers, Kubernetes workloads, GPU utilization, and cloud infrastructure bottlenecks.
  • Bachelor’s degree, or its equivalent, in a field related to the position’s requirements.
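
For context on the inference-serving experience above, here is a minimal Python sketch of querying a vLLM server through its OpenAI-compatible REST API. The endpoint URL and model name are placeholder assumptions for illustration, not specifics of this role:

    import requests

    # Assumption: a local vLLM server started with something like
    #   vllm serve <model> --port 8000
    # vLLM exposes an OpenAI-compatible HTTP API under /v1.
    BASE_URL = "http://localhost:8000/v1"  # placeholder endpoint

    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": "placeholder-model",  # whichever model the server loaded
            "messages": [{"role": "user", "content": "Say hello."}],
            "max_tokens": 32,
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])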

Responsibilities

  • Architect, deploy, and operate large-scale LLM inference servers and AI applications with a focus on low latency, high availability, and production reliability.
  • Design, provision, and maintain complex cloud architectures across Azure and AWS, including storage, compute, networking, databases, and each platform’s native LLM services.
  • Manage GPU-enabled Kubernetes clusters and Slurm-based HPC environments, optimizing resource allocation for AI training and inference workloads; a GPU inventory sketch follows this list.
  • Deploy and operate core Kubernetes infrastructure components and operators (GPU operators, ingress controllers, service meshes, CNIs, CSIs, and storage drivers).
  • Build scalable infrastructure-as-code and deployment workflows using Terraform, Helm, Kustomize, ArgoCD, and GitOps best practices.
  • Design and maintain centralized observability systems using Prometheus, Grafana, ClickHouse, and cloud-native monitoring tools; a sample Prometheus query sketch follows this list.
  • Participate in on-call rotations, lead incident response, perform post-mortems, and continuously improve system reliability and SLAs.
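
As a sketch of the GPU cluster management described above, the official Kubernetes Python client can inventory allocatable GPUs per node. This assumes the NVIDIA device plugin (or GPU operator) is installed, which is what advertises the nvidia.com/gpu resource:

    from kubernetes import client, config

    # Assumption: a reachable cluster via the local kubeconfig;
    # inside a pod, use config.load_incluster_config() instead.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        allocatable = node.status.allocatable or {}
        # nvidia.com/gpu is exposed by the NVIDIA device plugin / GPU operator.
        gpus = allocatable.get("nvidia.com/gpu", "0")
        print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")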
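
And a minimal sketch of the observability work, querying Prometheus over its HTTP API for average GPU utilization. The service URL and the metric name (from NVIDIA’s dcgm-exporter) are assumptions about a typical GPU monitoring stack:

    import requests

    PROM_URL = "http://prometheus:9090"  # assumption: in-cluster service address
    # DCGM_FI_DEV_GPU_UTIL is the per-GPU utilization gauge exported by
    # NVIDIA's dcgm-exporter in a typical GPU observability setup.
    query = "avg(DCGM_FI_DEV_GPU_UTIL)"

    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    for result in resp.json()["data"]["result"]:
        print(result["metric"], result["value"])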

Benefits

  • Diverse medical, dental, and vision options
  • 401k matching program
  • Unlimited paid time off
  • Parental leave and flexibility for all parents and caregivers
  • Support for country-specific visa needs for international employees living in the Bay Area