Principal Architect, Performance Analysis and Modeling

d-Matrix | Santa Clara, CA | Hybrid

About The Position

At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration. We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

This is a hybrid role, working onsite at our Santa Clara, CA headquarters three days per week.

The Role: Principal Architect, Performance Analysis and Modeling

d-Matrix is seeking outstanding computer architects to help accelerate AI application performance at the intersection of hardware and software, with a particular focus on emerging hardware technologies (such as DIMC, D2D, and 3D-DRAM) and emerging workloads (such as generative inference). Our acceleration philosophy cuts through the entire system, ranging from efficient tensor cores, storage, and data movement to the co-design of dataflow and collective communication techniques.

Requirements

  • BSEE with 10+ years of industry experience, or MSEE (preferred) with 8+ years of industry experience.
  • Solid grasp, through academic or industry experience, of multiple relevant areas: computer architecture, hardware/software co-design, performance modeling, and ML fundamentals (particularly DNNs).
  • Programming fluency in C/C++ or Python.
  • Experience developing analytical performance models and architecture simulators for performance analysis.
  • Self-motivated team player with strong sense of collaboration and initiative.

Nice To Haves

  • A research background with a publication record in top-tier computer architecture or machine learning venues (such as ISCA, MICRO, ASPLOS, HPCA, DAC, and MLSys) is a huge plus.

Responsibilities

  • As a member of the architecture team, you will analyze the latest ML workloads (multi-modal LLMs, CoT reasoning models, and video/audio generation).
  • You will contribute hardware and software features that power the next generation of inference accelerators in datacenters.
  • This role requires keeping up with the latest research in ML architecture and algorithms, and collaborating with partner teams including Product, Hardware Design, Compiler, Inference Server, and Kernels.
  • Your day-to-day work will include (1) analyzing the properties of emerging machine learning algorithms and workloads and identifying their functional and performance implications, (2) creating analytical models to project performance on current and future generations of d-Matrix hardware, and (3) proposing new HW/SW features to enable or accelerate these algorithms.