About The Position

We seek experienced backend engineers to own the systems that serve our diffusion LLMs in production. You'll build and operate infrastructure that handles billions of inference requests — optimizing for latency, throughput, cost, and reliability. This role sits at the intersection of ML systems and backend infrastructure.

Requirements

  • BS/MS/PhD in Computer Science or a related field (or equivalent experience).
  • 5+ years of experience building production backend systems.
  • Strong proficiency in Python, including async programming and concurrent systems.
  • Solid understanding of distributed systems, networking, and load balancing at scale.
  • Familiarity with Kubernetes, CI/CD pipelines, and cloud infra (AWS and/or Azure).

Nice To Haves

  • Experience serving LLMs or other large generative models in production at scale.
  • Hands-on experience with GPU instance management and cloud cost optimization (AWS, Azure).
  • Experience with infrastructure as code tools (Terraform) and deployment automation.
  • Experience with monitoring and observability tools (Prometheus, Grafana).
  • Familiarity with model serving frameworks (vLLM, Triton Inference Server, TensorRT-LLM).

Responsibilities

  • Design, build, and operate scalable backend services and model serving infrastructure for our diffusion LLMs.
  • Implement and manage load balancing, autoscaling, and traffic routing for model endpoints.
  • Build systems for model versioning, canary deployments, and zero-downtime rollouts.
  • Develop monitoring, alerting, and observability tooling to ensure SLA compliance and rapid incident response.
  • Benchmark and evaluate serving frameworks and hardware configurations to inform infrastructure decisions.
© 2024 Teal Labs, Inc.