About The Position

NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as "the AI computing company".

We are looking for versatile software engineers for our XLA team. NVIDIA is at the center of the AI revolution that's transforming how people live, work, and interact with technology. Come join us to build high-performance, production-grade software that's at the core of next-generation AI systems.

What you will be doing: In this role, you will develop compiler optimization algorithms for deep learning workloads. You will optimize inference and training performance for the JAX framework and the OpenXLA compiler on NVIDIA GPUs at scale. You'll collaborate with our partners on deep learning framework teams and our hardware architecture teams to accelerate the next generation of deep learning software. The scope of these efforts is described under Responsibilities below.
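To give a concrete flavor of the JAX/OpenXLA workflow described above, here is a minimal sketch of how a JAX function is lowered to the compiler IR that XLA then optimizes. This assumes JAX is installed and uses only standard JAX APIs (`jax.jit`, `.lower`, `.as_text`); it is an illustration, not part of the job description.

```python
import jax
import jax.numpy as jnp

def f(x):
    # A small computation: elementwise tanh followed by a matrix product.
    return jnp.tanh(x) @ x.T

x = jnp.ones((4, 4))

# jax.jit traces the Python function; .lower() produces the compiler input
# (StableHLO/MLIR text) that XLA's optimization passes then transform.
hlo_text = jax.jit(f).lower(x).as_text()
print(hlo_text)
```

Inspecting this textual IR is a typical first step when analyzing what graph-level optimizations the compiler applies to a deep learning workload.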

Requirements

  • Bachelor's, Master's, or Ph.D. in Computer Science, Computer Engineering, or a related field (or equivalent experience).
  • 4+ years of relevant work or research experience in performance analysis and compiler optimizations.
  • Ability to work independently, define project goals and scope, and lead your own development effort adopting clean software engineering and testing practices.
  • Excellent C/C++ programming and software design skills, including debugging, performance analysis, and test design.
  • Strong foundation in the architecture of CPUs, GPUs, or other high-performance hardware accelerators.
  • Knowledge of high-performance computing and distributed programming.
  • CUDA or OpenCL programming experience is desired but not required.
  • Strong interpersonal skills are required along with the ability to work in a dynamic product-oriented team.

Nice To Haves

  • Experience with the following technologies is a huge plus: XLA, TVM, MLIR, LLVM, OpenAI Triton, deep learning models and algorithms, and deep learning framework design.
  • A history of mentoring junior engineers and interns is a bonus.
  • Experience working with deep learning frameworks such as JAX, PyTorch, or TensorFlow.
  • Extensive experience with CUDA or with GPUs in general.
  • Experience with open-source compilers such as XLA, LLVM, MLIR or TVM.

Responsibilities

  • Crafting and implementing compiler optimization techniques for deep learning network graphs.
  • Designing novel graph partitioning and tensor sharding techniques for distributed training and inference.
  • Performance tuning and analysis.
  • Code-generation for NVIDIA GPU backends using open-source compilers such as MLIR, LLVM and OpenAI Triton.
  • Designing user facing features in JAX and related libraries and other general software engineering work.
  • Working closely with GPU hardware engineering teams to design AI compiler software features for next-generation GPUs.
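The tensor-sharding work listed above can be sketched with JAX's public sharding APIs. This is a minimal, hedged example, assuming JAX is installed; it runs even on a single CPU device (the mesh then has one device), and all names used (`Mesh`, `NamedSharding`, `PartitionSpec`, `jax.device_put`) are standard JAX APIs, not something specific to this role.

```python
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P
from jax.experimental import mesh_utils

# Build a 1-D device mesh over all available devices and name its axis "data".
devices = mesh_utils.create_device_mesh((jax.device_count(),))
mesh = Mesh(devices, axis_names=("data",))

# Shard a batch of activations along the leading ("data") axis; the second
# axis (None) is replicated across devices.
x = jnp.arange(32.0).reshape(8, 4)
sharded_x = jax.device_put(x, NamedSharding(mesh, P("data", None)))

# jit compiles through XLA; the compiler propagates the input sharding
# through the computation, so the result stays distributed.
f = jax.jit(lambda a: jnp.sin(a) * 2.0)
y = f(sharded_x)
```

Graph-partitioning work in the XLA compiler is what makes such sharding annotations execute efficiently across many GPUs.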

Benefits

  • Competitive salaries
  • Generous benefits package
  • Equity


What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

Ph.D. or professional degree
