Inference Frontend

Cerebras Systems, Sunnyvale, CA

About The Position

Cerebras Systems builds the world's largest AI chip, 56 times larger than a GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, transforming key workloads with ultra-high-speed inference.

Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

Requirements

  • Recently graduated from, or currently enrolled in, a university program in Computer Science, Computer Engineering, or a related discipline (graduating 2026). This is a new graduate position.
  • Strong problem-solving skills and excellent communication skills.
  • Proficient in one or more programming languages; exposure to and experience with C++ is an asset.

Responsibilities

  • Collaborate with world-class engineers on real-world challenges across the software stack.
  • Design, implement, and test software solutions that directly impact system performance and usability.
  • Learn and contribute across multiple layers of a fully integrated AI-accelerated system.
  • Gain hands-on experience with advanced hardware, compilers, distributed systems, and ML frameworks.