TL, Research Inference

OpenAI, San Francisco, CA

About The Position

In this role, you will build the systems that enable advanced AI models to run efficiently at scale. You will operate at the intersection of model research and systems engineering, translating new architectural ideas into high-performance inference systems that surface real tradeoffs in performance, memory, and scalability. Your work will directly influence how models are designed, evaluated, and iterated on across the research organization. By developing and evolving high-performance inference infrastructure, you will enable researchers to explore new ideas with a clear understanding of their computational and systems implications. This is not a product-serving role. Instead, it is a research-enabling systems role focused on performance, correctness, and realism: ensuring that AI research is grounded in what can actually scale.
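
To make one of those memory tradeoffs concrete: a transformer's KV cache alone can dominate GPU memory at long context lengths. Here is a back-of-envelope sizing sketch; all model shapes below are hypothetical and not taken from this posting.

```python
# Back-of-envelope KV-cache sizing (hypothetical shapes, fp16).
layers, heads, head_dim = 32, 32, 128
seq_len, batch, bytes_per_elem = 4096, 8, 2
kv_bytes = 2 * layers * heads * head_dim * seq_len * batch * bytes_per_elem  # K and V
print(f"KV cache: {kv_bytes / 2**30:.1f} GiB")  # 16.0 GiB at these shapes
```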

Requirements

You may be a strong fit if you:

  • Have experience building production inference systems, not just training or running models.
  • Are comfortable with GPU-centric performance engineering, including memory behavior and latency/throughput tradeoffs.
  • Have worked on multi-GPU or distributed systems involving batching, scheduling, or runtime coordination (see the batching sketch after this list).
  • Can reason end-to-end about inference pipelines, from request handling through execution and output streaming.
  • Are able to understand research ideas and implement them within real system and performance constraints.
  • Enjoy solving hard, ambiguous systems problems that only emerge at scale.
  • Prefer hands-on technical ownership and execution over abstract design work.
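
For a concrete sense of the batching and scheduling work described above, here is a minimal dynamic-batching sketch. It is illustrative only: the class name and knobs (max_batch, max_wait_ms) are assumptions, and production runtimes use continuous batching tied to the model's decode step rather than a simple queue.

```python
import queue
import threading
import time

class DynamicBatcher:
    """Hypothetical sketch: trade a bounded wait (latency) for larger
    batches (throughput). Not a production design."""

    def __init__(self, run_batch, max_batch=8, max_wait_ms=5.0):
        self.run_batch = run_batch      # callable: list[request] -> list[result]
        self.max_batch = max_batch      # throughput knob: amortize per-step overhead
        self.max_wait_ms = max_wait_ms  # latency knob: cap queueing delay
        self.requests = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def submit(self, request):
        """Enqueue a request; caller waits on `done`, then reads `holder`."""
        done, holder = threading.Event(), {}
        self.requests.put((request, done, holder))
        return done, holder

    def _loop(self):
        while True:
            # Block for the first request, then gather more until the batch
            # is full or the wait deadline expires.
            batch = [self.requests.get()]
            deadline = time.monotonic() + self.max_wait_ms / 1000.0
            while len(batch) < self.max_batch:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.requests.get(timeout=remaining))
                except queue.Empty:
                    break
            results = self.run_batch([req for req, _, _ in batch])
            for (_, done, holder), result in zip(batch, results):
                holder["result"] = result
                done.set()
```

The two knobs capture the core tradeoff: raising max_batch improves GPU utilization and throughput, while raising max_wait_ms lets batches fill at the cost of per-request latency.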

Responsibilities

  • Design and build high-performance inference runtimes for large-scale AI models, with a focus on efficiency, reliability, and scalability.
  • Own and optimize core execution paths, including model execution, memory management, batching, and scheduling.
  • Develop and improve distributed inference across multiple GPUs, including parallelism strategies, communication patterns, and runtime coordination.
  • Implement and optimize inference-critical operators and kernels informed by real-world workloads.
  • Partner closely with research teams to ensure new model architectures are supported accurately and efficiently in inference systems.
  • Diagnose and resolve performance bottlenecks through profiling, benchmarking, and low-level debugging (see the timing sketch after this list).
  • Contribute to observability, correctness, and reliability of large-scale AI systems.
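
As an illustration of the profiling work above, here is a minimal device-side timing harness using PyTorch's CUDA events. It assumes PyTorch and a CUDA-capable GPU; the matmul workload and shapes are placeholders.

```python
import torch

def bench(fn, warmup=10, iters=100):
    """Mean milliseconds per call, measured with CUDA events so that
    asynchronous kernel launches are timed on the device, not the host."""
    for _ in range(warmup):                       # warm up caches and clocks
        fn()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()                      # wait for queued kernels
    return start.elapsed_time(end) / iters

if __name__ == "__main__":
    a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    ms = bench(lambda: a @ b)
    tflops = 2 * 4096**3 / (ms * 1e-3) / 1e12     # matmul FLOPs = 2*M*N*K
    print(f"{ms:.3f} ms/iter, ~{tflops:.1f} TFLOP/s")
```

Host-side clocks would miscount here because kernel launches are asynchronous; recording events on the stream and synchronizing once yields accurate device time.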

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Not listed
  • Number of Employees: 1-10 employees
