About The Position

NVIDIA is pioneering the future of autonomous driving! Our comprehensive autonomous driving platform, NVIDIA DRIVE, is used by hundreds of automakers, truck makers, tier-1 suppliers, and robotaxi companies globally. We are looking for a world-class Principal Deep Learning Engineer to join our Autonomous Driving Perception team. In this highly impactful role, you will lead the development of state-of-the-art perception systems that enable our vehicles to understand their environment with superhuman accuracy! You will drive the architectural vision for our core deep learning models, focusing on detection, segmentation, and tracking, and guide these technologies from research to production.

Requirements

  • Ph.D. or MS in Computer Science, Robotics, Machine Learning, Computer Vision, or a related field (or equivalent experience).
  • 12+ years of applied research and software engineering experience, with a heavy emphasis on deep learning for computer vision.
  • Proven Track Record: Demonstrated success as a lead technical contributor in shipping commercial, high-quality deep learning software products to end customers.
  • Domain Expertise: Deep foundational knowledge and hands-on experience in building architectures for object detection, occupancy networks, semantic/instance segmentation, and temporal tracking.
  • Data Intuition: A strong intuition for data-centric AI. Proven experience curating massive datasets, defining labeling taxonomies, and building automated pipelines to surface hard examples and edge cases.
  • Engineering Excellence: Strong programming skills in Python and C++, with experience using deep learning frameworks like PyTorch.

Nice To Haves

  • Prior experience specifically within the autonomous driving or robotics industry shipping models deployed on edge compute.
  • Experience with model optimization, quantization, and deployment on embedded platforms (especially using NVIDIA TensorRT).
  • First-author publications at top-tier computer vision or machine learning conferences (e.g., CVPR, ICCV, ECCV, NeurIPS).
  • Experience designing multi-modal perception systems (camera, lidar, radar fusion).

Responsibilities

  • Architect and Innovate: Develop, train, and deploy state-of-the-art deep learning architectures (e.g., Transformers and their variants, few-shot learning) for 3D obstacle detection, dense occupancy prediction, semantic segmentation, and multi-object tracking.
  • Ship High-Quality Products: Drive the end-to-end productization of perception models. You will own shipping robust, production-grade deep learning features to our global automotive customers, ensuring they meet the highest standards of safety and quality.
  • Lead Corner-Case-Driven Development: Champion a rigorous, safety-critical development process. You will proactively identify, mine, and solve long-tail corner cases in complex urban and highway driving environments.
  • Define Data Strategy: Act as the technical authority on data quality. You will define data labeling guidelines, establish quality control metrics, and work closely with data operations to ensure high-fidelity ground truth for complex perception tasks.
  • Technical Leadership: Serve as a technical pillar for the perception organization. You will mentor senior engineers, influence cross-functional teams (planning, mapping, and infrastructure), and set the technical roadmap for next-generation perception architectures.

Benefits

  • You will be eligible for equity and benefits.