About The Position

At Liquid, we’re not just building AI models; we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas; we’re architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you’re helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments, your work will directly shape the frontier of intelligent systems.

While San Francisco and Boston are preferred, we are open to other locations.

Requirements

  • Highly skilled engineer with extensive experience in inference on embedded hardware and a deep understanding of CPU, NPU, and GPU architectures
  • Proficiency in building and enhancing edge inference stacks
  • Strong ML experience: proficiency in Python and PyTorch to interface with the ML team at a deeply technical level
  • Hardware awareness: understanding of modern hardware architecture, including cache hierarchies and memory access patterns, and their impact on performance (see the sketch after this list)
  • Coding proficiency: expertise in Python, C++, or Rust for AI-driven real-time embedded systems
  • Low-level optimization: ability to optimize core primitives to ensure efficient model execution
  • Self-guided ownership: ability to independently take a PyTorch model and inference requirements and deliver a fully optimized edge inference stack with minimal guidance
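
To make the hardware-awareness bullet concrete, here is a minimal sketch (illustration only, not part of the posting) of how memory access patterns affect performance: both functions below sum the same row-major NumPy array, but the column-wise walk touches memory with a large stride and typically runs markedly slower. The array shape and timing harness are illustrative assumptions.

    import time
    import numpy as np

    # Row-major (C-order) array: elements within a row are contiguous in memory.
    a = np.zeros((4096, 4096), dtype=np.float32)

    def sum_rows(m):
        # Contiguous traversal: each fetched cache line is fully used.
        total = 0.0
        for i in range(m.shape[0]):
            total += m[i, :].sum()
        return total

    def sum_cols(m):
        # Strided traversal: each element touched can pull in a fresh cache line.
        total = 0.0
        for j in range(m.shape[1]):
            total += m[:, j].sum()
        return total

    for fn in (sum_rows, sum_cols):
        start = time.perf_counter()
        fn(a)
        print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")

The same access-pattern reasoning carries over to the low-level C++ or Rust kernels this role would optimize.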

Nice To Haves

  • Experience with mobile development and cache-aware algorithms is highly valued

Responsibilities

  • Optimize inference stacks tailored to each platform as we prepare to deploy our models across various edge device types, including CPUs, embedded GPUs, and NPUs
  • Take our models, dive deep into the task, and return with a highly optimized inference stack, leveraging existing frameworks such as llama.cpp, ExecuTorch, and TensorRT to deliver exceptional throughput and low latency (a sketch of one such flow follows this list)
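
As illustration only (not from the posting), here is a minimal sketch of one such flow: lowering a small PyTorch module to an ExecuTorch program that the on-device runtime can load. It assumes PyTorch 2.1+ and a recent ExecuTorch release (exact APIs vary between versions), and TinyClassifier is a hypothetical stand-in for a real model.

    import torch
    from torch.export import export
    from executorch.exir import to_edge

    class TinyClassifier(torch.nn.Module):
        # Hypothetical stand-in for a production model.
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(64, 10)

        def forward(self, x):
            return self.linear(x).softmax(dim=-1)

    model = TinyClassifier().eval()
    example_inputs = (torch.randn(1, 64),)

    # Capture the module as a portable ExportedProgram graph.
    exported = export(model, example_inputs)

    # Lower to the ExecuTorch edge dialect, then serialize a .pte file
    # that the ExecuTorch runtime can load on-device.
    edge = to_edge(exported)
    et_program = edge.to_executorch()
    with open("tiny_classifier.pte", "wb") as f:
        f.write(et_program.buffer)

In a real deployment, a hardware-specific backend delegate (for example, XNNPACK on CPU) would be applied during lowering; llama.cpp and TensorRT have their own analogous export and optimization paths.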

Benefits

  • Hands-on experience with state-of-the-art technology at a leading AI company
  • A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 51-100 employees
