AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Inf1 and Trn1 servers that use them. This role is for a software engineer on the Machine Learning Applications (ML Apps) team for AWS Neuron. The role is responsible for the development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models such as Llama 4, Mixtral, DBRX, and beyond, as well as Stable Diffusion, Vision Transformers, and many more. The distributed training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions with Trainium. Experience training these large models using Python is a must. FSDP, DeepSpeed, and other distributed training libraries are central to this work, and extending them for Neuron-based systems is key.