About The Position

Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open-weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.

In this role, you will bridge the gap between research and production by turning cutting-edge algorithms into scalable training systems. You will design and optimize the core infrastructure behind frontier AI models, from reinforcement learning training loops and distributed GPU training to massive-scale data pipelines. Our systems train models across thousands of GPUs and process petabyte-scale datasets. We care deeply about numerical stability, throughput, and reproducibility.

This team owns and evolves the core infrastructure behind our training systems. We focus on:

  • Reinforcement learning training infrastructure
  • Distributed training and inference systems
  • Experiment infrastructure and reproducibility
  • Large-scale data pipelines

The goal is to build the engineering foundation that allows researchers to iterate quickly while training models at massive scale. You will architect and optimize the core training infrastructure that powers our models, including RL training loops, distributed GPU systems, and large-scale data pipelines, and you will work closely with researchers to transform new ideas into reliable, scalable training systems.

We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company and help define the frontier of open foundation models.

Requirements

  • You are a strong software engineer who speaks the language of machine learning.
  • You may not have a PhD, but you know how to implement a research paper.
  • You have deep experience in at least one of the following: distributed training and inference, or data infrastructure.
  • You enjoy working at the boundary between machine learning algorithms, distributed systems, and high-performance computing.
  • You care deeply about performance, numerical stability, and reproducibility.
  • You thrive in high-agency environments and enjoy solving hard technical problems.

Responsibilities

  • Designing and optimizing large-scale training loops and data pipelines.
  • Implementing state-of-the-art techniques and ensuring they are numerically stable and computationally efficient.
  • Building internal tooling for launching, monitoring, and reproducing complex experiments.
  • Diagnosing deep bottlenecks across the training stack (GPU memory issues, communication overhead, dataloader stalls).
  • Translating research prototypes into reusable, production-grade infrastructure.
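As a flavor of the bottleneck-diagnosis work described above, here is a minimal, framework-free sketch (the loader and step function are hypothetical stand-ins, not our actual stack): timing the data-wait against the compute per step is a quick way to tell whether a training loop is dataloader-bound.

```python
import time

def profile_steps(loader, step_fn, n_steps=5):
    """Measure time spent waiting on data vs. computing, per step.

    A step whose data-wait dominates suggests a dataloader stall:
    the accelerator would sit idle waiting for input.
    """
    stats = []
    it = iter(loader)
    for _ in range(n_steps):
        t0 = time.perf_counter()
        batch = next(it)          # time spent waiting on the dataloader
        t1 = time.perf_counter()
        step_fn(batch)            # time spent in the training step
        t2 = time.perf_counter()
        stats.append({"data_s": t1 - t0, "compute_s": t2 - t1})
    return stats

# Toy stand-ins: a slow "loader" and a fast "step" to mimic a stall.
def slow_loader():
    while True:
        time.sleep(0.02)          # pretend I/O and preprocessing
        yield [0.0] * 8

def fast_step(batch):
    time.sleep(0.005)             # pretend forward/backward pass

stats = profile_steps(slow_loader(), fast_step, n_steps=3)
stalled = all(s["data_s"] > s["compute_s"] for s in stats)
print("dataloader-bound:", stalled)
```

In a real training stack the same idea applies, but the timings would come from profiler traces rather than wall-clock sleeps, and the fix might be more loader workers, prefetching, or moving preprocessing off the critical path.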

Benefits

  • Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
  • Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
  • Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
  • Benefits & balance: paid time off when you need it, relocation support, and more perks that optimize your time.
  • Opportunities to connect with teammates: lunch and dinner are provided daily. We have regular off-sites and team celebrations.
© 2024 Teal Labs, Inc