At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

As a Principal AI Infrastructure Solution Engineer, you will partner with AMD's AI software teams and customers to enable large-scale LLM training and inference on AMD Instinct GPUs. You will design and validate production-ready Kubernetes architectures and translate inference frameworks such as vLLM and SGLang into deployable customer solutions. Your work will accelerate customer time-to-production and strengthen AMD's leadership in AI infrastructure.

THE PERSON:

You are a solution-oriented AI infrastructure engineer with strong expertise in GPU-accelerated computing and large-scale AI deployments. You excel at translating complex technologies into customer-ready solutions and delivering production-grade Kubernetes-based inference and training systems. You bring hands-on experience with Kubernetes-native distributed training, including scheduling, topology-aware GPU placement, and operating resilient, high-performance AI workloads at scale.
Job Type: Full-time
Career Level: Principal
Education Level: No Education Listed