d-Matrix · Posted 3 days ago
Senior
Hybrid • Santa Clara, CA
Publishing Industries

d-Matrix is seeking an experienced AI Applications Engineer to drive the successful deployment and support of d-Matrix's cutting-edge AI products and solutions, specifically in the realm of generative AI inference and AI/ML software support. In this highly technical role, you will work closely with customers and internal teams to resolve complex software, hardware, and firmware challenges related to AI workloads. The ideal candidate will have expertise in AI/ML infrastructure, with a focus on inference solutions and performance optimization for data center environments. This position requires a strong blend of engineering acumen and customer-facing skills to ensure the seamless deployment and continued success of our products.

Responsibilities:

  • Provide expert guidance and support to customers deploying generative AI inference models, including assisting with integration, troubleshooting, and optimizing AI/ML software stacks on d-Matrix hardware.
  • Perform functional and performance validation testing, ensuring that AI models run efficiently and meet customer expectations.
  • Evaluate throughput and latency performance for d-Matrix accelerators, profile workloads to identify bottlenecks, and optimize performance (e.g., through quantization and custom kernel development).
  • Collaborate with internal engineering and product teams to produce developer guides, technical notes, and other supporting materials that ease the adoption of d-Matrix AI/ML solutions.
Qualifications:

  • Bachelor's or Master's degree in Electrical Engineering, Computer Engineering, Computer Science, or a related field, with 10+ years of experience.
  • In-depth knowledge and hands-on experience with generative AI inference at scale, including the integration and deployment of AI models in production environments.
  • Experience with automation tools and scripting languages (Linux or Windows shell scripting, Python, Go) to streamline deployment, monitoring, and issue resolution processes.
  • Hands-on experience with AI/ML infrastructure accelerators (e.g., GPUs, TPUs) and expertise in optimizing performance for generative AI inference workloads.
  • Ability to communicate complex technical concepts to diverse audiences, from developers to business stakeholders.
  • Prior experience in customer-facing roles for enterprise-level AI and datacenter products, with a focus on AI/ML software and generative AI inference with GPUs or accelerators.
  • Understanding of domain-specific hardware architectures (for example, GPUs, ML accelerators, SIMD vector processors, and DSPs) and how to map ML algorithms to an accelerator architecture.
  • Strong analytical skills with a proven track record of solving complex problems in AI/ML systems, including performance optimization and troubleshooting in AI/ML frameworks.
  • Extensive experience deploying AI/ML frameworks such as PyTorch, OpenAI Triton, and vLLM, and familiarity with container orchestration platforms like Kubernetes.
  • Excellent communication and presentation skills, with a demonstrated ability to guide customers through complex AI/ML system integration and troubleshooting.