CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.

About the Role

CoreWeave runs some of the largest GPU clusters in the world. The AI infrastructure behind those clusters determines how workloads are placed, how resources are shared, and how reliably systems perform under constant pressure.

As a Principal Engineer in AI Infrastructure, you will lead the design and evolution of the cluster orchestration systems that make this possible. This includes Slurm, Kubernetes, SUNK, and the control planes that support AI training, inference, and model onboarding at scale. You will define long-term architecture, solve hard scaling problems, and set technical direction across teams. Your work will directly affect how quickly customers can run models, how efficiently we use GPUs, and how reliably the platform behaves at scale.

What You’ll Do

Architecture and Technical Direction
- Define the long-term architecture for CoreWeave’s orchestration platforms across Kubernetes, Slurm, SUNK, Kueue, and related systems.
- Act as a technical authority on scheduling, quota enforcement, fairness, preemption, and multi-tenant GPU isolation.
- Make design decisions that balance performance, reliability, cost, and operational complexity.

Orchestration Platform Development
- Lead the evolution of Kubernetes-native control planes, including SUNK and custom operators.
- Design systems that support workload admission, validation, and rollout, including model onboarding flows.
- Identify and remove scaling limits across schedulers, control planes, registries, networking, and storage.

Reliability and Operations
- Set standards for reliability, observability, and operational readiness across orchestration services.
- Define SLOs, alerting, and incident response practices for platform-critical systems.
- Ensure systems behave predictably during failures, peak load, and rapid growth.

Hands-on Engineering
- Write and review production code for Kubernetes controllers, schedulers, admission logic, and internal tooling.
- Measure and improve scheduling latency, container startup time, image distribution, and cold-start performance.
- Lead architecture and design reviews across infrastructure teams.

Leadership and Influence
- Mentor senior and staff engineers and help grow technical leaders.
- Influence platform, infrastructure, security, and product teams through clear technical judgment.
- Engage with customers and open-source communities on deep technical topics when needed.

Why CoreWeave?

At CoreWeave, AI infrastructure is the product. As a Principal Engineer in cluster orchestration, you will be responsible for systems that directly determine how efficiently GPUs are used, how reliably large models run, and how quickly customers can move from research to production.

This role puts you at the center of hard problems in scheduling, resource isolation, and large-scale control planes. You will work on systems where small design choices affect thousands of GPUs and real customer workloads.

If you care about building infrastructure that runs under constant pressure, scales without shortcuts, and enables the next generation of AI workloads, CoreWeave is a place where your work will matter.
Job Type
Full-time
Career Level
Principal