Amazon • posted 2 days ago
$129,300 - $223,600/Yr
Full-time • Mid Level
Cupertino, CA

About the position

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn1 and Inf1 servers that use them. This role is for a software engineer on the Machine Learning Applications (ML Apps) team for AWS Neuron. The role is responsible for the development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models such as Llama 4, Mixtral, DBRX, and beyond, as well as Stable Diffusion, Vision Transformers, and many more. The distributed training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions on Trainium. Experience training these large models using Python is a must. FSDP, DeepSpeed, and other distributed training libraries are central to this work, and extending them to Neuron-based systems is key.
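For context on the stack this role works in (not part of the posting itself), here is a minimal sketch of a single PyTorch/XLA training step, the programming model that the Neuron SDK builds on for Trainium; the model, data, and hyperparameters are placeholders chosen only for illustration.

    # Minimal PyTorch/XLA training-step sketch; model and data are placeholders.
    import torch
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()                          # selects the XLA (Neuron) device
    model = torch.nn.Linear(1024, 1024).to(device)    # placeholder model
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    inputs = torch.randn(8, 1024).to(device)
    targets = torch.randn(8, 1024).to(device)

    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    xm.optimizer_step(optimizer)   # all-reduces gradients across replicas, then steps
    xm.mark_step()                 # flushes the lazy XLA graph so it is compiled and executed

In practice, libraries such as FSDP or DeepSpeed sit on top of this loop to shard parameters and optimizer state across devices; enabling and tuning that layer for Neuron is part of the work described above.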

Responsibilities

  • Help lead efforts to build distributed training support into PyTorch and JAX using XLA and the Neuron compiler and runtime stacks.
  • Tune models to achieve the highest performance and maximize efficiency when running on AWS Trainium.
  • Collaborate with chip architects, compiler engineers, and runtime engineers.

Requirements

  • 3+ years of non-internship professional software development experience.
  • 2+ years of non-internship design or architecture experience of new and existing systems.
  • Experience programming in at least one programming language.

Nice-to-haves

  • 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations.
  • Bachelor's degree in computer science or equivalent.

Benefits

  • Flexible working hours.
  • Mentorship and career growth opportunities.
  • Inclusive team culture.