About The Position

Adobe Applied Science & Machine Learning (ASML) is seeking a Staff Machine Learning Training Framework Engineer to play a critical role in building and scaling the core training systems behind Adobe’s generative AI foundation models. In this role, you will serve as a senior technical owner for key components of our training framework, translating research needs into reliable, scalable, and high‑performance training infrastructure. Rather than focusing on a single model, your work will enable multiple multimodal and video foundation models by strengthening the shared systems used to train them. You will operate at the intersection of applied research and large‑scale systems execution, ensuring that training workflows are robust, reproducible, and performant across large GPU clusters. This role is ideal for a senior engineer who thrives on deep technical ownership, complex execution, and close collaboration with research teams.

Requirements

  • Education: Master’s or PhD degree in Computer Science, Electrical Engineering, or a related field, or equivalent practical experience.
  • Strong Systems Engineering Skills: Proficiency in Python and C++, with experience contributing to large, shared codebases that support multiple users or teams.
  • Proven ML Training Experience: Hands‑on experience training models using PyTorch (or JAX), including multi‑GPU and multi‑node distributed training setups.
  • Distributed Systems Understanding: Solid understanding of synchronization, state management, fault tolerance, and performance tradeoffs in distributed systems.
  • Senior‑Level Execution: Demonstrated ability to independently own complex technical problems, drive solutions to completion, and deliver high‑quality systems relied upon by others.

Nice To Haves

  • Experience supporting large‑scale foundation model training or long-running multi-node training jobs.
  • Familiarity with ML training infrastructure such as DeepSpeed, Accelerate, or internal training platforms.
  • Experience working closely with applied research teams on rapidly evolving model requirements.
  • Exposure to profiling, debugging, and optimizing training performance at scale.

Responsibilities

  • Training Framework Ownership: Own the design and implementation of major components of the training framework, including abstractions for model configuration, optimizer and scheduler integration, checkpointing, and experiment management.
  • Large‑Scale Training Execution: Implement and support distributed training strategies such as PyTorch FSDP, Tensor Parallelism, and Pipeline Parallelism, ensuring correctness, stability, and scalability across multi‑node GPU environments.
  • Reliability & Fault Tolerance: Improve the resilience of long-running training jobs by strengthening restartability, state management, and failure handling mechanisms.
  • Performance‑Aware Framework Design: Identify framework‑level inefficiencies and reduce overhead related to memory usage, communication, or execution orchestration in large training runs.
  • Research Enablement: Partner directly with applied researchers to support new model architectures and training requirements, ensuring the framework adapts quickly to evolving research needs.
  • Training Pipeline Integration: Collaborate with infrastructure and platform teams to integrate the training framework with scheduling, storage, monitoring, and logging systems used in production‑scale environments.

Benefits

  • At Adobe, you will be immersed in an exceptional work environment that is recognized around the world.
  • You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach where ongoing feedback flows freely.
  • If you’re looking to make an impact, Adobe's the place for you.
  • Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer.