Software Development Engineer – Distributed Inference

Advanced Micro Devices, Inc., Austin, TX

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

AMD is looking for a software engineer who is passionate about distributed inferencing on AMD GPUs and about improving the performance of key applications and benchmarks. You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

THE PERSON:

We are seeking a software engineer with strong technical expertise in C++/Python development, experienced in solving performance issues and investigating scalability on multi-GPU, multi-node clusters. They are also passionate about quality assurance, benchmarking, and automation in the AI/ML space. The ideal candidate thrives in both collaborative and independent environments, demonstrates excellent problem-solving skills, and takes ownership in defining goals and delivering impactful solutions.

Requirements

  • Strong technical expertise in C++/Python development, with experience solving performance issues and investigating scalability on multi-GPU, multi-node clusters.
  • Passionate about quality assurance, benchmarking, and automation in the AI/ML space.
  • Thrives in both collaborative and independent environments.
  • Demonstrates excellent problem-solving skills.
  • Takes ownership in defining goals and delivering impactful solutions.
  • Strong C/C++ and Python skills, with experience in software design, debugging, performance analysis, and test development.
  • Experience running AI workloads on large-scale, heterogeneous compute clusters.
  • Familiarity with cluster management and orchestration platforms such as SLURM and Kubernetes (K8s).
  • Experience with GitHub, Jenkins, or similar CI/CD tools and modern development workflows.
  • Bachelor’s, Master’s, or PhD degree in Computer Science, Computer Engineering, or a related field, or equivalent practical experience.

Nice To Haves

  • Hands-on experience with AI inference or serving frameworks such as vLLM, SGLang, or Llama.cpp.
  • Understanding of KV cache transfer mechanisms and technologies (e.g., Mooncake, NIXL/RIXL) and expert parallelization approaches (e.g., DeepEP, MORI, PPLX-Garden).

Responsibilities

  • Distributed AI Enablement and Benchmarking: Enable and benchmark AI models on large-scale distributed systems to evaluate performance, accuracy, and scalability.
  • Scalable Systems Optimization: Optimize AI workloads across scale-up (multi-GPU), scale-out (multi-node), and scale-across distributed system configurations.
  • Cross-Team Collaboration: Collaborate closely with internal GPU library teams to analyze and optimize distributed workloads for high throughput and low latency.
  • Parallelization Strategies: Develop and apply optimal parallelization strategies for AI workloads to achieve best-in-class performance across diverse system configurations.
  • Model Infrastructure and Management: Contribute to distributed model management systems, model zoos, monitoring frameworks, benchmarking pipelines, and technical documentation.
  • Performance Monitoring and Visualization: Build and maintain real-time dashboards reporting performance, accuracy, and reliability metrics for internal stakeholders and external users.

Benefits

  • AMD benefits at a glance.