About The Position

NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as "the AI computing company".

NVIDIA is hiring software engineers for its Deep Learning & AI Compiler (DLC) team. Academic and commercial groups around the world are using GPUs to power a revolution in deep learning, enabling breakthroughs in areas such as large language models, generative AI, recommendation systems, image classification, and speech recognition. With the rapid advancement of AI, the DLC has become the backbone of NVIDIA's inference engine, spanning data centers, personal devices, automotive, and robotics. The compiler must deliver leading inference performance, fast build times, reduced memory footprints, and ease of use in both Ahead-of-Time and Just-in-Time compilation modes. Join the team building the DLC, which will be used by the entire deep learning community.

Requirements

  • Bachelor's, Master's, or Ph.D. in Computer Science, Computer Engineering, a related field, or equivalent experience.
  • 3+ years of relevant work or research experience in performance analysis and compiler optimizations.
  • Experience with compiler technologies (e.g., MLIR, XLA, LLVM).
  • Excellent C/C++ and Python programming and software design skills, including debugging, performance analysis, and test design.
  • Ability to work independently, define project goals and scope, and lead your own development efforts.
  • Strong interpersonal skills and the ability to work in a fast-moving, dynamic, product-oriented team.

Nice To Haves

  • Understanding of deep learning models, algorithms, and frameworks, such as PyTorch and XLA.
  • Understanding of LLM inference optimizations and techniques.
  • Experience generating high-performance GPU kernels with fast build times.
  • Proficiency in GPU architecture.
  • CUDA or OpenCL programming experience.
  • Track record of new hardware bring-up.

Responsibilities

  • Develop compiler IR, programming models, and optimizations for future GPU architectures.
  • Collaborate with members of the deep learning software framework teams and the hardware architecture teams to accelerate the next generation of deep learning software.
  • The scope of these efforts includes defining public APIs, performance optimization and analysis, designing and implementing compiler optimizations and kernel generation for neural networks, and other general software engineering work.