Principal Software Engineer, CoreAI

Microsoft · Redmond, WA

About The Position

The CoreAI GPU Infrastructure team builds the foundational accelerated compute platforms that power large-scale AI training and inference across Azure. Our mission is to deliver secure, reliable, and highly efficient GPU infrastructure that enables multi-tenant AI systems at global scale while maximizing utilization, performance, and developer productivity. This role sits at the intersection of cloud infrastructure, systems software, virtualization, and container platforms, working closely with CoreAI, Azure Infrastructure, OS, Networking, and Hardware teams to deliver end-to-end platform capabilities.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Requirements

  • Bachelor's Degree in Computer Science or related technical field and 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, Python or equivalent experience.
  • Proven ability to design and operate large-scale production infrastructure with high reliability and performance requirements.
  • Strong problem-solving skills and the ability to debug complex, cross-layer systems issues.
  • Demonstrated technical leadership, including mentoring engineers and driving cross-team architectural alignment.
  • Hands-on experience with virtualization and/or container platforms (e.g., VMs, Kubernetes, container runtimes).
  • Strong collaboration and communication skills, with the ability to work across organizational boundaries.

Nice To Haves

  • Familiarity with distributed training and inference stacks (e.g., NCCL-style collectives, model/data parallelism).
  • Experience building or operating multi-tenant AI platforms in cloud environments.
  • Familiarity with high-performance networking and low-latency communication stacks.
  • Familiarity with GPU-accelerated computing (e.g., CUDA, GPU drivers, device plugins, or runtime integration).
  • Familiarity with GPU virtualization, passthrough, or partitioning technologies.
  • Knowledge of confidential computing, trusted execution environments, or hardware-backed isolation.

Responsibilities

  • Design and build GPU-accelerated infrastructure for training and inference workloads, spanning bare metal, virtual machines, and containerized environments.
  • Develop systems for GPU device management, scheduling, isolation, and sharing (e.g., partial GPU allocation, multi-tenant usage).
  • Build and operate advanced orchestration and resource governance scenarios using platforms such as AKS, Dynamic Resource Allocation (DRA), and related Kubernetes ecosystem capabilities to enable fair sharing, isolation, and efficient utilization of accelerated resources.
  • Build and evolve virtualization and container stacks to support modern AI workloads, including secure and confidential compute scenarios.
  • Optimize performance, reliability, and utilization across large GPU fleets, including scale-up and scale-out configurations.
  • Partner with networking and storage teams to enable high-performance interconnects (e.g., RDMA/InfiniBand-class networking) for distributed workloads.
  • Drive end-to-end platform features from design through production, including observability, diagnostics, and operational excellence.
  • Influence platform architecture and technical direction across teams through design reviews and technical leadership.