About The Position

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design, and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic about taking on new problems across the full stack as we continue to push technology forward.

Are you passionate about squeezing every last drop of performance out of advanced hardware accelerators? Step into a role where you will shape the future of AI. In this role, you will drive the performance and optimization of both training and serving, delivering massive impact for Google and its global customers. You will join the highly interdisciplinary Core ML team, with exposure to the newest Tensor Processing Unit (TPU) and Graphics Processing Unit (GPU) hardware, the latest ML models, and the advanced toolchains that bridge them. Your work will directly enable AI research and production deployments across Google Cloud and the broader open-source ecosystem, and you will address complex technical issues that directly impact the efficiency and scalability of AI across the industry.

The AI and Infrastructure team is redefining what's possible. We empower Google customers with breakthrough capabilities and insights by delivering AI and Infrastructure at unparalleled scale, efficiency, reliability, and velocity. Our customers include Googlers, Google Cloud customers, and billions of Google users worldwide. We're the driving force behind Google's groundbreaking innovations, empowering the development of our cutting-edge AI models, delivering unparalleled computing power to global services, and providing the essential platforms that enable developers to build the future. From software to hardware, our teams are shaping the future of world-leading hyperscale computing, with key teams working on the development of our TPUs, Vertex AI for Google Cloud, Google Global Networking, Data Center operations, systems research, and much more.

Requirements

  • Bachelor’s degree or equivalent practical experience.
  • 5 years of experience with software development in C++ or Python.
  • 3 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.
  • Experience with performance optimization.

Nice To Haves

  • Experience optimizing TPU/GPU code using low-level kernel languages such as Pallas, Compute Unified Device Architecture (CUDA), or Triton.
  • Knowledge of ML frameworks (e.g., JAX, PyTorch) and common operations such as attention and Mixture of Experts (MoE), including model optimization and low-precision formats.
  • Understanding of modern accelerators (e.g., data movement, pipelining, heterogeneous compute, and scale-out).
  • Understanding of compiler principles (optimization, code generation) and toolchains such as MLIR and OpenXLA.
  • Track record of building developer infrastructure, including OSS libraries, flexible high-performance APIs, and easy-to-consume documentation to empower the community.
  • Excellent investigative and problem-solving capabilities, and strong communication skills across cross-functional teams.

Responsibilities

  • Design and optimize high-performance kernels (using languages like Pallas, Mosaic, and Triton) targeting TPU and GPU architectures for critical ML operations, from massive training runs to high-speed inference.
  • Architect infrastructure such as benchmarking suites, autotuning frameworks, performance analysis tools, regression testing, and documentation, transforming how the developer community interacts with increasingly critical custom kernels in key Open-Source Software (OSS) libraries (e.g., Tokamax, vLLM/tpu_inference).
  • Track the latest advancements in hardware architectures, compiler technologies, and AI models to identify new opportunities for performance optimization through custom kernels.
  • Engage with ML researchers, framework developers (JAX, PyTorch), and compiler engineers (XLA) to enhance adoption, identify new requirements, and address bottlenecks by providing appropriate solutions.

Benefits

  • Bonus
  • Equity
  • Benefits