The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the
software development kit used to accelerate deep learning and GenAI workloads on
Amazon's custom machine learning accelerators, Inferentia and Trainium. This
comprehensive toolkit includes an ML compiler, runtime, and application
framework that integrates seamlessly with popular ML frameworks like PyTorch and
JAX, enabling unparalleled ML inference and training performance.
The Inference Enablement and Acceleration team is at the forefront of running a
wide range of models and supporting novel architectures while maximizing their
performance on AWS's custom ML accelerators. Working across the stack, from
PyTorch down to the hardware-software boundary, our engineers build systematic
infrastructure, develop new methods, and create high-performance kernels for ML
functions, ensuring every compute unit is fine-tuned for optimal performance on
our customers' demanding workloads. We combine deep hardware knowledge with ML
expertise to push the boundaries of what's possible in AI acceleration.
As part of the broader Neuron organization, our team works across multiple
technology layers, from frameworks and kernels to the compiler, runtime, and
collectives. We not only optimize current performance but also contribute to
future architecture designs, working closely with customers to enable their
models and ensure optimal performance. This role offers a unique opportunity to
work at the intersection of machine learning, high-performance computing, and
distributed architectures, where you'll help shape the future of AI acceleration
technology.
You will architect and implement business-critical features and mentor a
brilliant team of experienced engineers. We operate in spaces that are very
large, yet our teams remain small and agile. There is no blueprint. We're
inventing. We're experimenting. It is a unique learning culture. The team
works closely with customers on their model enablement, providing direct support
and optimization expertise to ensure their machine learning workloads achieve
optimal performance on AWS ML accelerators. The team collaborates with open
source ecosystems to provide seamless integration and bring peak performance at
scale for customers and developers.
This role is responsible for the development, enablement, and performance tuning
of a wide variety of LLM model families, including massive-scale large language
models such as the Llama family, DeepSeek, and beyond. The Inference Enablement
and Acceleration team works side by side with compiler and runtime engineers to
create, build, and tune distributed inference solutions with Trainium and
Inferentia. Experience optimizing inference performance for both latency and
throughput on such large models, across the stack from system-level
optimizations through to PyTorch or JAX, is a must.
You can learn more about Neuron here:
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-cc/index.html
https://aws.amazon.com/machine-learning/neuron/
https://github.com/aws/aws-neuron-sdk
https://www.amazon.science/how-silicon-innovation-became-the-secret-sauce-behind-awss-success
Key job responsibilities
This role will help lead the effort to build distributed inference support for
PyTorch in the Neuron SDK, tuning models to achieve the highest performance and
maximize their efficiency on AWS Trainium and Inferentia silicon and servers.
Strong software development skills in Python, systems-level programming
experience, and ML knowledge are all critical to this role. Our engineers
collaborate across compiler, runtime, framework, and hardware teams to optimize
machine learning workloads for our global customer base. Working at the
intersection of software, hardware, and machine learning systems, you'll bring
expertise in low-level optimization, system architecture, and ML model
acceleration. In this role, you will:
* Design, develop, and optimize machine learning models and frameworks for
deployment on custom ML hardware accelerators.
* Participate in all stages of the ML system development lifecycle, including
distributed-computing-based architecture design, implementation, performance
profiling, hardware-specific optimizations, testing, and production deployment.
* Build infrastructure to systematically analyze and onboard multiple models
with diverse architectures.
* Design and implement high-performance kernels and features for ML operations,
leveraging the Neuron architecture and programming models.
* Analyze and optimize system-level performance across multiple generations of
Neuron hardware.
* Conduct detailed performance analysis using profiling tools to identify and
resolve bottlenecks.
* Implement optimizations such as fusion, sharding, tiling, and scheduling (a
brief illustrative sketch of sharding follows this list).
* Conduct comprehensive testing, including unit and end-to-end model testing,
with continuous deployment and releases through pipelines.
* Work directly with customers to enable and optimize their ML models on AWS
accelerators.
* Collaborate across teams to develop innovative optimization techniques.
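To make the sharding bullet above concrete, here is a minimal, hypothetical
sketch of column-parallel sharding of a linear layer in plain PyTorch. It is
illustrative only: it assumes a generic PyTorch environment rather than the
Neuron SDK, and the class and variable names are invented for this example.

```python
import torch
import torch.nn as nn

class ColumnShardedLinear(nn.Module):
    """Illustrative column-parallel linear layer: output features are split
    across `world_size` shards, and each rank computes only its own slice.
    In a real deployment each rank would hold one shard and the slices
    would be combined with an all-gather collective."""

    def __init__(self, in_features: int, out_features: int,
                 world_size: int, rank: int):
        super().__init__()
        assert out_features % world_size == 0, "out_features must shard evenly"
        self.rank = rank
        # Each rank owns only its slice of the full weight matrix.
        self.shard = nn.Linear(in_features, out_features // world_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Produces a [batch, out_features / world_size] slice of the output.
        return self.shard(x)

# Single-process demonstration: emulate two ranks and concatenate their
# slices, which is what an all-gather would do across real devices.
if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(4, 16)
    shards = [ColumnShardedLinear(16, 32, world_size=2, rank=r)
              for r in range(2)]
    full_output = torch.cat([s(x) for s in shards], dim=-1)
    print(full_output.shape)  # torch.Size([4, 32])
```

Column-wise sharding is attractive because each shard's matmul is independent,
so the communication step can be deferred or fused with downstream operations.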
A day in the life
You will collaborate with a cross-functional team of applied scientists, system
engineers, and product managers to deliver state-of-the-art inference
capabilities for Generative AI applications. Your work will involve debugging
performance issues, optimizing memory usage, and shaping the future of Neuron's
inference stack across Amazon and the Open Source Community. As you design and
code solutions to help our team drive efficiencies in software architecture,
you’ll create metrics, implement automation and other improvements, and resolve
the root cause of software defects.
You will also build high-impact solutions to deliver to our large customer base,
participate in design discussions and code reviews, and communicate with
internal and external stakeholders. You will work cross-functionally to help
drive business decisions with your technical input. You will work in a
startup-like development environment, where you’re always working on the most
important initiative.
About the team
The Inference Enablement and Acceleration team fosters a builder’s culture where
experimentation is encouraged, and impact is measurable. We emphasize
collaboration, technical ownership, and continuous learning. Our team is
dedicated to supporting new members. We have a broad mix of experience levels
and tenures, and we’re building an environment that celebrates knowledge-sharing
and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but
kind, code reviews. We care about your career growth and strive to assign
projects that help you develop your engineering expertise so you feel empowered
to take on more complex tasks in the future. Join us to solve some of the most
interesting and impactful infrastructure challenges in AI/ML today.
Basic Qualifications:
- Bachelor's degree in computer science or equivalent
- 5+ years of non-internship professional software development experience
- 5+ years of non-internship design or architecture (design patterns,
reliability and scaling) of new and existing systems experience
- Fundamentals of machine learning and LLMs, including their architectures and
training and inference lifecycles, along with hands-on experience optimizing
model execution.
- Software development experience in C++ or Python (experience in at least one
language is required).
- Strong understanding of system performance, memory management, and parallel
computing principles.
- Proficiency in debugging, profiling, and implementing best software
engineering practices in large-scale systems.
Preferred Qualifications:
- Familiarity with PyTorch, JIT compilation, and AOT tracing.
- Familiarity with CUDA kernels or equivalent low-level ML kernels.
- Experience developing high-performance kernels with frameworks such as CUTLASS
or FlashInfer.
- Familiarity with tile-level semantics and syntax similar to Triton.
- Experience with online/offline inference serving on vLLM, SGLang, TensorRT, or
similar platforms in production environments.
- Deep understanding of computer architecture and operating-system-level
software, with working knowledge of parallel computing.
Amazon is an equal opportunity employer and does not discriminate on the basis
of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely
and cooperatively with other employees, supervisors, and staff; adhere to
standards of excellence despite stressful conditions; communicate effectively
and respectfully with employees, supervisors, and staff to ensure exceptional
customer service; and follow all federal, state, and local laws and Company
policies. Criminal history may have a direct, adverse, and negative relationship
with some of the material job duties of this position. These include the duties
and responsibilities listed above, as well as the abilities to adhere to company
policies, exercise sound judgment, effectively manage stress and work safely and
respectfully with others, exhibit trustworthiness and professionalism, and
safeguard business operations and the Company’s reputation. Pursuant to the Los
Angeles County Fair Chance Ordinance, we will consider for employment qualified
applicants with arrest and conviction records.
Our inclusive culture empowers Amazonians to deliver the best results for our
customers. If you have a disability and need a workplace accommodation or
adjustment during the application and hiring process, including support for the
interview or onboarding process, please visit
https://amazon.jobs/content/en/how-we-hire/accommodations for more
information. If the country/region you’re applying in isn’t listed, please
contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic
markets. The base pay for this position ranges from $129,300/year in our lowest
geographic market up to $223,600/year in our highest geographic market. Pay is
based on a number of factors including market location and may vary depending on
job-related knowledge, skills, and experience. Amazon is a total compensation
company. Dependent on the position offered, equity, sign-on payments, and other
forms of compensation may be provided as part of a total compensation package,
in addition to a full range of medical, financial, and/or other benefits. For
more information, please visit
https://www.aboutamazon.com/workplace/employee-benefits. This position will
remain posted until filled. Applicants should apply via our internal or external
career site.