Our Deep Learning model performance engineering team at NVIDIA is hiring software engineers at all experience levels to build and optimize the libraries and tools that enable Deep Learning researchers and engineers to design, develop, and deploy efficient AI applications. We are an ambitious and diverse team that builds optimizations directly into the mainstream open-source Deep Learning frameworks PyTorch and JAX, boosting performance at all levels of NVIDIA's AI stack. Our team has a wide collaborative footprint, working not only with multiple teams across NVIDIA but also with the broader open-source community to deliver state-of-the-art Deep Learning performance on the best AI platform in the world!

What you will be doing:
- Build and support Transformer Engine, the open-source library for accelerating the training of Large Language Models.
- Collaborate on systems research that improves Deep Learning model performance, such as training with extremely low precision and new parallelism methods.
- Implement, benchmark, and optimize new Deep Learning models, such as LLMs straight out of groundbreaking research, to scale efficiently on NVIDIA GPUs and systems.
- Build and contribute to NVIDIA submissions on community benchmarks such as MLPerf.
- Engage with the open-source community, and support enterprise customers and partners by delivering the benefits of NVIDIA's latest hardware and software innovations.
- Influence the design of new hardware generations and core platform software components for NVIDIA hardware and systems.
Job Type: Full-time
Career Level: Mid Level