About The Position

We are looking for an Applied Deep Learning Research Scientist, Efficiency! Join our ADLR – Efficiency team to make deep learning faster and less energy-hungry. Our team influences next-generation hardware to make AI more efficient; we work on the Nemotron series to make our state-of-the-art deep learning models the most efficient open-source models available; and we develop new technology, software, and algorithms that optimize neural networks for training and deployment. Topics include quantization, sparsity, optimizers, reinforcement learning, efficient architectures, and pre-training.

We sit inside the Nemotron pre-training team and collaborate across the company to make NVIDIA GPUs the most efficient AI platform possible. Our work quite literally reaches the entire deep learning world. We are looking for applied researchers who want to develop new efficiency technologies and who want to understand the ‘why’ of efficiency: getting to the root cause of why things do or do not work, and using that knowledge to develop new algorithms, numeric formats, and architecture improvements.

Requirements

  • PhD in AI, computer science, computer engineering, math, or a related field; equivalent experience in the areas listed below can substitute for an advanced degree.
  • 5+ years of relevant industrial research experience.
  • Familiarity with state-of-the-art neural network architectures, optimizers, and LLM training.
  • Experience with modern DL training frameworks and/or inference engines.
  • Fluency in Python and solid coding/software-engineering practices.
  • A proven track record of publications and/or the ability to run large-scale experiments.
  • A strong interest in neural network efficiency.

Nice To Haves

  • Experience in quantization, pruning, numerics and efficient architectures.
  • A background in computer architecture.
  • Experience with GPU computing, kernels, CUDA programming, and/or performance analysis.

Responsibilities

  • Research low-bit number representations and pruning, and their effect on neural network inference and training accuracy. This includes meeting the requirements of existing state-of-the-art neural networks as well as co-designing future neural network architectures and optimizers.
  • Develop new algorithms that make deep learning more efficient while retaining accuracy, and open-source or publish these algorithms for the world to use.
  • Run large-scale deep learning experiments to prove out ideas and analyze the effects of efficiency improvements.
  • Collaborate across the company with teams making the hardware, software and deep learning architectures.

Benefits

  • You will be eligible for equity and benefits.