About The Position

Are you looking for an exciting opportunity to join a dynamic and growing team in a fast-paced and challenging area? This is a unique opportunity to work with the Global Technology Applied Research (GTAR) center at JPMorganChase. GTAR designs and conducts research across multiple frontier technologies to enable novel discoveries and inventions and to inform and develop next-generation solutions for the firm’s clients and businesses.

As a senior-level engineer in GTAR, you will design, optimize, and scale large-model pretraining workloads across hyperscale accelerator clusters. This role sits at the intersection of distributed systems, kernel-level performance engineering, and large-scale model training. The ideal candidate can take a fixed hardware budget (accelerator type, node topology, interconnect, and cluster size) and design an efficient, stable, and scalable training strategy spanning parallelism layout, memory strategy, kernel optimization, and end-to-end system performance. This is a hands-on role with direct impact on training throughput, efficiency, and cost at scale.

Requirements

  • Master’s degree with 5+ years of industry experience, or Ph.D. with 3+ years of industry experience, in computer science, physics, math, engineering, or a related field.
  • Engineering experience at top AI labs, HPC centers, chip vendors, or hyperscale ML infra teams.
  • Strong experience designing and operating large-scale distributed training jobs across multi-node accelerator clusters.
  • Deep understanding of distributed parallelism strategies: data parallelism, tensor/model parallelism, pipeline parallelism, and memory/optimizer sharding.
  • Proven ability to profile and optimize training performance using industry-standard tools such as NVIDIA Nsight, the PyTorch profiler, or equivalent (see the brief sketch after this list).
  • Hands-on experience with GPU programming and kernel optimization.
  • Strong understanding of accelerator memory hierarchies, bandwidth limitations, and compute-communication tradeoffs.
  • Experience with collective communication libraries and patterns (e.g., NCCL-style collectives).
  • Proficiency in Python for ML systems development and C++ for performance-critical components.
  • Experience with modern ML frameworks such as PyTorch or JAX in large-scale training settings.
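For illustration only, here is a minimal sketch of the kind of profiling workflow referenced above, using the PyTorch profiler. The model, shapes, and trace directory are placeholders, and it assumes a CUDA-capable machine; it is not a prescribed method for the role.

    import torch
    from torch.profiler import profile, schedule, ProfilerActivity

    # Placeholder model and optimizer; any training step can be profiled this way.
    model = torch.nn.Linear(4096, 4096).cuda()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    def train_step(batch):
        optimizer.zero_grad(set_to_none=True)
        loss = model(batch).sum()
        loss.backward()
        optimizer.step()

    # Skip one step, warm up for one, then record three steps.
    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        schedule=schedule(wait=1, warmup=1, active=3),
        on_trace_ready=torch.profiler.tensorboard_trace_handler("./trace"),
    ) as prof:
        for _ in range(5):
            train_step(torch.randn(1024, 4096, device="cuda"))
            prof.step()

    # Rank kernels by GPU time to surface obvious hot spots.
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))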

Nice To Haves

  • Experience optimizing training workloads on non-GPU accelerators (e.g., TPU, or wafer-scale architectures).
  • Familiarity with compiler-driven ML systems (e.g., XLA, MLIR, Inductor) and graph-level optimizations.
  • Experience designing custom fused kernels or novel execution strategies for attention or large matrix operations.
  • Strong understanding of scaling laws governing large-model pretraining dynamics and stability considerations.
  • Contributions to open-source ML systems, distributed training frameworks, or performance-critical kernels.
  • Prior experience collaborating directly with hardware vendors or accelerator teams.

Responsibilities

  • Design and optimize distributed training strategies for large-scale models, including data, tensor, pipeline, and context parallelism.
  • Manage end-to-end training performance: from data input pipelines through model execution, communication, and checkpointing.
  • Identify and eliminate performance bottlenecks using systematic profiling and performance modeling.
  • Develop or optimize high-performance kernels using CUDA, Triton, or equivalent frameworks.
  • Design and optimize distributed communication strategies to maximize overlap between computation and inter-node data movement (a brief illustrative sketch follows this list).
  • Design memory-efficient training configurations (caching, optimizer sharding, checkpoint strategies).
  • Evaluate and optimize training on multiple accelerator platforms, including GPUs and non-GPU accelerators.
  • Contribute performance improvements back to internal training pipelines.
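As a purely illustrative example of the compute-communication overlap mentioned above, the sketch below overlaps asynchronous all-reduces with independent computation using torch.distributed. It assumes a process group is already initialized (e.g., via torchrun), and the function names are hypothetical; it is not a description of the team's internal systems.

    import torch.distributed as dist

    def overlapped_step(grad_buckets, compute_fn):
        """Launch all-reduces asynchronously so inter-node communication
        proceeds while independent computation runs."""
        handles = []
        for bucket in grad_buckets:
            # async_op=True returns a work handle instead of blocking.
            handles.append(dist.all_reduce(bucket, op=dist.ReduceOp.SUM, async_op=True))

        # Independent work (e.g., the next microbatch's forward pass)
        # runs while the collectives are in flight.
        result = compute_fn()

        # Wait for the reductions before the gradients are consumed.
        for handle in handles:
            handle.wait()
        return result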

Benefits

  • We offer a competitive total rewards package, including a base salary determined by role, experience, skill set, and location.
  • Those in eligible roles may receive commission-based pay and/or discretionary incentive compensation, paid in the form of cash and/or forfeitable equity, awarded in recognition of individual achievements and contributions.
  • We also offer a range of benefits and programs to meet employee needs, based on eligibility.
  • These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more.