This role will support the Fleet Infrastructure team at OpenAI. The Fleet team focuses on running the world's largest, most reliable, and most frictionless GPU fleet to support OpenAI's general-purpose model training and deployment. Work on this team includes maximizing the number of GPUs doing useful work by building user-friendly scheduling and quota systems; running a reliable, low-maintenance platform by building push-button automation for Kubernetes cluster provisioning and upgrades; supporting research workflows with service frameworks and deployment systems; ensuring fast model startup times through high-performance snapshot delivery, from blob storage down to hardware caching; and much more! As an engineer on the Fleet Infrastructure team, you will design, write, deploy, and operate infrastructure systems for model deployment and training on one of the world's largest GPU fleets. The scale is immense, the timelines are tight, and the organization is moving fast; this is an opportunity to shape a critical system in support of OpenAI's mission to advance AI capabilities responsibly.