Principal Engineer, Cluster Orchestration

CoreWeave
San Francisco, CA (Hybrid)
$206,000 - $303,000

About The Position

CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.

About the Role

CoreWeave runs some of the largest GPU clusters in the world. The AI infrastructure behind those clusters determines how workloads are placed, how resources are shared, and how reliably systems perform under constant pressure. As a Principal Engineer in AI Infrastructure, you will lead the design and evolution of the cluster orchestration systems that make this possible, including Slurm, Kubernetes, SUNK, and the control planes that support AI training, inference, and model onboarding at scale. You will define long-term architecture, solve hard scaling problems, and set technical direction across teams. Your work will directly affect how quickly customers can run models, how efficiently we use GPUs, and how reliably the platform behaves at scale.

What You’ll Do

Architecture and Technical Direction
  • Define the long-term architecture for CoreWeave’s orchestration platforms across Kubernetes, Slurm, SUNK, Kueue, and related systems.
  • Act as a technical authority on scheduling, quota enforcement, fairness, pre-emption, and multi-tenant GPU isolation.
  • Make design decisions that balance performance, reliability, cost, and operational complexity.

Orchestration Platform Development
  • Lead the evolution of Kubernetes-native control planes, including SUNK and custom operators.
  • Design systems that support workload admission, validation, and rollout, including model onboarding flows.
  • Identify and remove scaling limits across schedulers, control planes, registries, networking, and storage.

Reliability and Operations
  • Set standards for reliability, observability, and operational readiness across orchestration services.
  • Define SLOs, alerting, and incident response practices for platform-critical systems.
  • Ensure systems behave predictably during failures, peak load, and rapid growth.

Hands-on Engineering
  • Write and review production code for Kubernetes controllers, schedulers, admission logic, and internal tooling.
  • Measure and improve scheduling latency, container startup time, image distribution, and cold-start performance.
  • Lead architecture and design reviews across infrastructure teams.

Leadership and Influence
  • Mentor senior and staff engineers and help grow technical leaders.
  • Influence platform, infrastructure, security, and product teams through clear technical judgment.
  • Engage with customers and open-source communities on deep technical topics when needed.

Why CoreWeave?

At CoreWeave, AI infrastructure is the product. As a Principal Engineer in cluster orchestration, you will be responsible for systems that directly determine how efficiently GPUs are used, how reliably large models run, and how quickly customers can move from research to production. This role puts you at the center of hard problems in scheduling, resource isolation, and large-scale control planes. You will work on systems where small design choices affect thousands of GPUs and real customer workloads. If you care about building infrastructure that runs under constant pressure, scales without shortcuts, and enables the next generation of AI workloads, CoreWeave is a place where your work will matter.

Requirements

  • 15+ years of experience building and operating large-scale distributed systems.
  • Deep, practical knowledge of Kubernetes and Slurm internals.
  • Experience running GPU-heavy platforms for AI training, inference, or HPC workloads.
  • Strong background in Go and cloud-native systems development.
  • Proven ability to set technical direction across teams without direct authority.
  • Comfortable making high-impact technical decisions in complex systems.
  • Bachelor’s or Master’s degree in a relevant field, or equivalent experience.

Nice To Haves

  • Experience with systems such as Kueue, Kubeflow, Argo Workflows, Ray, Istio, or Knative.
  • Background in ML platform engineering, model onboarding, or lifecycle management.
  • Strong understanding of scheduling strategies, pre-emption, quota enforcement, and elastic scaling.
  • Track record of operating highly reliable systems with clear SLOs and incident processes.
  • Contributions to Kubernetes, ML infrastructure, or related open-source projects.
  • Experience mentoring senior engineers and raising engineering standards.

Benefits

  • Medical, dental, and vision insurance - 100% paid for by CoreWeave
  • Company-paid Life Insurance
  • Voluntary supplemental life insurance
  • Short and long-term disability insurance
  • Flexible Spending Account
  • Health Savings Account
  • Tuition Reimbursement
  • Ability to Participate in Employee Stock Purchase Program (ESPP)
  • Mental Wellness Benefits through Spring Health
  • Family-Forming support provided by Carrot
  • Paid Parental Leave
  • Flexible, full-service childcare support with Kinside
  • 401(k) with a generous employer match
  • Flexible PTO
  • Catered lunch each day in our office and data center locations
  • A casual work environment
  • A work culture focused on innovative disruption