Sr. Fellow Machine Learning Engineer

Advanced Micro Devices, Inc. (San Jose, CA)
Hybrid

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

We are looking for a Fellow/Sr. Fellow Machine Learning Engineer to join our Training At Scale team. If you are excited by the challenge of distributed training of large models across many GPUs, and passionate about improving training efficiency while innovating and generating new ideas, then this role is for you. You will be part of a world-class team focused on the challenge of training generative AI. The ideal candidate has experience with distributed training pipelines, is knowledgeable in distributed training algorithms (Data Parallel, Tensor Parallel, Pipeline Parallel, Expert Parallel), and is familiar with training large models.
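
For context on the simplest of the parallelism strategies named above, here is a minimal, illustrative sketch of data-parallel training with PyTorch DistributedDataParallel. The model, data, and hyperparameters are placeholders and are not specific to AMD's stack; on AMD GPUs, PyTorch's "nccl" backend name is backed by RCCL.

```python
# Minimal data-parallel training sketch (illustrative placeholders only).
# Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model: a single linear layer standing in for a large network.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        # Each rank trains on its own shard of data; DDP all-reduces gradients
        # across ranks during backward.
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```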

Requirements

  • Strong background in machine learning, distributed systems, or AI infrastructure.
  • Proven experience building and optimizing distributed training systems for large models.
  • Proficiency in Python and C++, including performance profiling, debugging, and large-scale optimization.
  • Excellent communication and problem-solving skills.
  • Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

Nice To Haves

  • Experience in both model-level and application-level development and optimization.
  • Strong familiarity with ML frameworks (PyTorch, JAX, TensorFlow) and distributed frameworks (TorchTitan, Megatron-LM).
  • Hands-on expertise with LLMs, recommendation systems, or ranking models.
  • Experience collaborating across hardware, compiler, and system software layers.

Responsibilities

  • Train large models to convergence on AMD GPUs at scale.
  • Improve end-to-end training pipeline performance on large-scale GPU clusters (see the profiling sketch after this list).
  • Improve end-to-end debuggability on large-scale GPU clusters.
  • Design and optimize the distributed training pipeline and software stack to scale out.
  • Contribute your changes to open source.
  • Stay up-to-date with the latest training algorithms/frameworks.
  • Influence the direction of the AMD AI platform.
  • Collaborate with various groups and stakeholders across teams.
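
As a rough illustration of the kind of performance work involved, here is a minimal sketch of measuring one training step with torch.profiler to find where GPU time goes. The model and input sizes are placeholders, not anything from AMD's actual pipeline; on ROCm builds of PyTorch, GPU kernels are reported under the CUDA activity.

```python
# Minimal sketch: profile a single training step to surface GPU hot spots.
import torch
from torch.profiler import profile, ProfilerActivity

# Placeholder model and batch standing in for a real training workload.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(64, 1024, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Print the operators that dominate GPU time for this step.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```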