Member of Technical Staff, Inference & Serving

Inception
San Francisco, CA

About The Position

We're looking for engineers and scientists to design, optimize, and scale the systems that power our diffusion LLMs in production. Your work will make inference faster, more cost-effective, and more reliable.

Requirements

  • BS/MS/PhD in Computer Science, Engineering, or a related field (or equivalent experience).
  • Knowledge of ML serving frameworks (SGLang, vLLM, Triton Inference Server, TensorRT-LLM).
  • Understanding of ML frameworks (PyTorch, TensorFlow) from a systems perspective.
  • Familiarity with high-performance computing and GPU programming (CUDA).
  • Experience with containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines.
  • Background in performance optimization and profiling of ML systems.

Nice To Haves

  • Experience building and maintaining large-scale language models with tens of billions of parameters or more.
  • Experience with distributed systems and cloud computing platforms (AWS/GCP/Azure).
  • Experience with ML workflow orchestration tools (Kubeflow, Airflow).
  • Experience with model optimization techniques (quantization, distillation, speculative decoding, continuous batching).
  • Knowledge of ML-specific infrastructure challenges (checkpointing, resource scheduling, etc.).

Responsibilities

  • Build and optimize high-performance model serving systems for low-latency inference of diffusion LLMs.
  • Extend orchestration frameworks (Kubernetes, Ray, SLURM) for distributed inference, evaluation, and large-batch serving.
  • Implement and manage load balancing, autoscaling, and traffic routing for model endpoints.
  • Build systems for model versioning, canary deployments, and zero-downtime rollouts.
  • Develop monitoring, alerting, and observability tooling to ensure SLA compliance and rapid incident response.
  • Collaborate with ML researchers to translate model advances (new architectures, quantization techniques, batching strategies) into production-ready serving improvements.