GPU Performance Engineer | Experienced Hire

Susquehanna International Group, LLP
Onsite

About The Position

We are looking for a GPU Performance Engineer to build highly optimized CUDA kernels for low-latency inference. This role is focused on workloads where off-the-shelf runtimes and vendor libraries do not fully exploit the structure of the model, and where custom kernels, memory layouts, and execution strategies can deliver meaningful gains.

You will work closely with quantitative researchers and engineers to understand model structure, identify computational bottlenecks, and turn mathematical ideas into production-grade GPU implementations. You will use your understanding of GPU hardware to help shape models that are both mathematically effective and efficient to run. The problems span compact neural networks, tree-based models, and other structured inference workloads where latency, throughput, and efficiency all matter.

This role is a strong fit for someone who enjoys low-level optimization, performance analysis, and translating abstract models into hardware-efficient code.

Requirements

  • Strong proficiency in writing and optimizing CUDA kernels
  • Solid programming experience, with C/C++ preferred
  • Deep understanding of GPU architecture, including memory hierarchy, SIMT execution, occupancy, and latency/throughput tradeoffs
  • Ability to reason about numerical stability, precision, performance tradeoffs, and how model design choices affect hardware efficiency
  • Strong problem-solving skills and comfort working with low-level systems

Nice To Haves

  • PhD in Mathematics, Physics, Computer Science, Engineering, or related quantitative field
  • Strong background in linear algebra, probability, numerical methods, or scientific computing
  • Experience working with quantitative research teams or financial models
  • Demonstrated ability to improve real-world inference performance beyond baseline framework or library implementations
  • Familiarity with PTX-level behavior, tensor core utilization, or architecture-specific tuning
  • Exposure to ONNX Runtime, TensorRT, Triton, TVM, or similar systems
  • Exposure to neural networks, tree-based models (e.g., LightGBM), or state-space models (e.g., Mamba architectures)
  • Experience with kernel fusion, custom operators, model compilation, or graph-level optimization

Responsibilities

  • Design, implement, and optimize custom CUDA kernels for latency-critical inference workloads
  • Develop fine-grained GPU implementations tailored to specific model structures
  • Analyze quantitative research models and computational bottlenecks to identify opportunities for parallelization and hardware-efficient execution
  • Collaborate directly with quantitative researchers to translate mathematical models into high-performance compute pipelines
  • Optimize end-to-end inference performance through kernel tuning, memory-layout design, execution strategy, I/O optimization, and precision tradeoffs
  • Profile and benchmark GPU performance
  • Improve latency and throughput in production inference systems
  • Contribute to GPU architecture decisions and performance best practices

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
