The Compute Infrastructure team runs the GPU fleet and large-scale compute clusters that serve the models behind ChatGPT and the API, while also supporting training workloads for our next-generation models. We operate a large, modern GPU fleet and provide a unified platform on which other OpenAI teams seamlessly run production Applied AI and Research training workloads. We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. Safety is more important to us than unfettered growth.

As a Technical Program Manager for Compute Infrastructure, you will join an engineer-first TPM team and own the end-to-end delivery of large-scale GPU clusters, partnering with engineers to bring clusters online across external providers and partners. You'll run a broad, parallel portfolio spanning hardware, networking, power, and cooling, driving execution, risk management, and crisp alignment from working teams through leadership to deliver production-ready capacity at scale.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
Job Type: Full-time
Career Level: Mid Level
Education Level: No education requirement listed