NVIDIA · Posted 17 days ago
$148,000 - $287,500/Yr
Mid Level
Redmond, WA
Computer and Electronic Product Manufacturing

About the position

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It's a unique legacy of innovation fueled by great technology and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

We are hiring an AI/ML Infrastructure Software Engineer to join our Hardware Infrastructure team. In this role, you will play a crucial part in boosting researcher productivity by driving improvements across the entire stack. Your primary responsibility will be working closely with customers to identify and close infrastructure gaps, enabling innovative AI and ML research on GPU clusters. Together, we can create powerful, efficient, and scalable solutions as we shape the future of AI/ML technology!

Responsibilities

  • Collaborate closely with our AI and ML research teams to understand their infrastructure needs and obstacles, translating those observations into actionable improvements.
  • Monitor and optimize the performance of our infrastructure, ensuring high availability, scalability, and efficient resource utilization.
  • Help define and improve important measures of AI researcher efficiency, ensuring that our actions are in line with measurable results.
  • Collaborate with diverse teams, including researchers, data engineers, and DevOps professionals, to build a seamless and coordinated AI/ML infrastructure ecosystem.
  • Stay on top of the latest advancements in AI/ML technologies, frameworks, and effective strategies, and promote their implementation within the company.

Requirements

  • BS or equivalent experience in Computer Science or related field, with 5+ years of proven experience in AI/ML and HPC workloads and infrastructure.
  • Hands-on experience using or operating High Performance Computing (HPC) grade infrastructure, with in-depth knowledge of accelerated computing (e.g., GPUs, custom silicon), storage (e.g., Lustre, GPFS, BeeGFS), scheduling and orchestration (e.g., Slurm, Kubernetes, LSF), high-speed networking (e.g., InfiniBand, RoCE, Amazon EFA), and container technologies (e.g., Docker, Enroot).
  • Expertise in running and optimizing large-scale distributed training workloads using PyTorch (DDP, FSDP), NeMo, or JAX, along with a deep understanding of AI/ML workflows spanning data processing, model training, and inference pipelines.
  • Proficiency in programming and scripting languages such as Python, Go, and Bash; familiarity with cloud computing platforms (e.g., AWS, GCP, Azure); and experience with parallel computing frameworks and paradigms.
  • Passion for continual learning and keeping abreast of new technologies and effective approaches in the AI/ML infrastructure field.
  • Excellent communication and collaboration skills, with the ability to work effectively with teams and individuals of different backgrounds.

Benefits

  • Competitive salaries
  • Comprehensive benefits package
  • Equity eligibility