About The Position

NVIDIA is seeking a System Architect to lead rack-level and platform pathfinding for our next-generation GPU and LPU systems. Today, we’re tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

What You Will Be Doing

In this role you will help shape how our accelerators integrate at rack and system level, working with mechanical, electrical, SI/PI, networking, and data center teams to deliver high-performance, reliable AI platforms.

Requirements

  • BS in Electrical Engineering, Mechanical Engineering, Computer Engineering, or related field (or equivalent experience); MS/PhD preferred.
  • 8+ years in server, storage, networking, or rack-level hardware, including several years in system or platform architecture and pathfinding.
  • Strong experience with mechanical aspects of data center hardware: rack structures, packaging, air or liquid cooling, cabling, and serviceability.
  • Experience owning complex cross-functional decisions and enthusiasm for working at rack scale.
  • Strong background in electrical architecture and PDN: rack and board power trees, redundancy, protections, and high-speed interfaces/SI-PI fundamentals.
  • Solid understanding of data center infrastructure: rack power distribution, network fabrics, structured cabling, grounding, safety, and regulations.
  • Proven ability to lead cross-functional decisions, explain tradeoffs clearly, and deliver architectures under ambiguity and tight schedules.

Nice To Haves

  • Designed GPU, accelerator, or other high-power AI systems, including multi-GPU/LPU nodes and dense rack solutions.
  • Defined rack-level networking architectures such as leaf/spine fabrics, TOR strategies, and structured cabling.
  • Owned end-to-end SI/PI for complex platforms, from budgeting and simulations through lab correlation and signoff.
  • Led platform bring-up and validation at scale, correlating architecture assumptions with measured behavior.
  • Acted as a technical leader: mentoring engineers, scaling processes and tools for rack architecture, and contributing to standards, publications, or patents in server or rack design, SI/PI, power delivery, or thermal and mechanical design.

Responsibilities

  • Lead pathfinding for system and rack architecture for new GPU/LPU platforms, building on existing NVIDIA architectures and improving them where it matters most.
  • Tailor system-level solutions to improve performance, efficiency, and scalability for LPU-based systems and workloads.
  • Define rack, node, and subsystem requirements across power, cooling, mechanics, SI/PI budgets, connectivity, management, and reliability.
  • Collaborate with mechanical, electrical, SI/PI, networking, firmware, and data center operations teams to converge on practical architectures and execution plans.
  • Drive SI/PI, power, and thermal feasibility through focused modeling, simulation, and experiments, then feed results back into designs.
  • De-risk new architectures with targeted prototypes and experiments, and document clear specs, interfaces, and guidelines so teams can move fast.

Benefits

  • We offer competitive salaries and a generous benefits package, and are widely considered one of the technology world’s most desirable employers.
  • You will also be eligible for equity.