Principal GPU Performance Engineer - Artificial Intelligence

Advanced Micro Devices, Inc. - Austin, TX

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences – from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

Requirements

  • Strong expertise in GPU tuning and optimization (CUDA, ROCm, or equivalent).
  • Understanding of GPU microarchitecture (execution units, memory hierarchy, interconnects, warp scheduling).
  • Hands-on experience with distributed training frameworks and communication libraries (e.g., PyTorch DDP, DeepSpeed, Megatron-LM, NCCL/RCCL, MPI).
  • Advanced Linux, container (e.g., Docker), and GitHub skills.
  • Proficiency in Python or C++ for performance-critical development.
  • Familiarity with large-scale AI training infrastructure (NVLink, InfiniBand, PCIe, cloud/HPC clusters).
  • Experience in benchmarking methodologies, performance analysis/profiling (e.g. Nsight), performance monitoring tools.
  • Experience scaling training to thousands of GPUs for foundation models is a plus.
  • Strong track record of optimizing large-scale AI systems in cloud or HPC environments is desired.

Responsibilities

  • Profile and optimize large-scale AI training workloads (transformers, multimodal, diffusion, recommender systems) across multi-node, multi-GPU clusters.
  • Identify bottlenecks in compute, memory, interconnects, and communication libraries (NCCL/RCCL, MPI), and deliver optimizations to maximize scaling efficiency.
  • Collaborate with compiler/runtime teams to improve kernel performance, scheduling, and memory utilization.
  • Develop and maintain benchmarks and traces representative of foundation model training workloads.
  • Provide performance insights to AMD Instinct GPU architecture teams, informing hardware/software co-design decisions for future architectures.
  • Partner with framework teams (PyTorch, JAX, TensorFlow) to upstream performance improvements and enable better scaling APIs.
  • Present findings to cross-functional teams and leadership, shaping both software and hardware roadmaps.

Benefits

  • AMD benefits at a glance.