Software Engineer 3

MongoDB
Seattle, WA | Hybrid

About The Position

We’re looking for a Software Engineer 3 to help build the next-generation inference platform that supports embedding models used for semantic search, retrieval, and AI-native experiences in MongoDB Atlas.

You’ll join the broader Search and AI Platform organization and collaborate with ML researchers and engineers from our Voyage.ai acquisition. Together, we’re building infrastructure for real-time, low-latency, and high-scale inference — fully integrated with Atlas and designed for developer-first experiences.

As a Software Engineer 3, you'll focus on building core systems and services that power model inference at scale. You'll own key components of the infrastructure, work across teams to ensure tight integration with Atlas, and contribute to a platform designed for reliability, performance, and ease of use.

This role is based in Palo Alto, CA or Seattle, WA with an in-office or hybrid work model.

Requirements

  • 2+ years of experience building backend or infrastructure systems at scale
  • Strong software engineering skills in languages such as Go, Rust, Python, or C++, with an emphasis on performance and reliability
  • Experienced in cloud-native architectures, distributed systems, and multi-tenant service design
  • Familiar with concepts in ML model serving and inference runtimes, even if not directly deploying models
  • Knowledge of vector search systems (e.g., Faiss, HNSW, ScaNN) is a plus
  • Comfortable working with cross-functional teams, including ML researchers, backend engineers, and platform teams
  • Motivated to work on systems integrated into MongoDB Atlas and used by thousands of developers

Nice To Haves

  • Experience integrating infrastructure with production ML workloads
  • Understanding of hybrid retrieval, prompt-driven systems, or retrieval-augmented generation (RAG)
  • Contributions to open-source infrastructure for ML serving or search

Responsibilities

  • Design and build components of a multi-tenant inference platform integrated directly with MongoDB Atlas, supporting semantic search and hybrid retrieval
  • Collaborate with AI engineers and researchers to productionize inference for embedding models and rerankers — enabling both batch and real-time use cases
  • Contribute to platform capabilities such as latency-aware routing, model versioning, health monitoring, and observability
  • Improve performance, autoscaling, GPU utilization, and resource efficiency in a cloud-native environment
  • Work across product, infrastructure, and ML teams to ensure the inference platform meets the scale, reliability, and latency demands of Atlas users
  • Gain hands-on experience with tools like vLLM and container orchestration with Kubernetes

Benefits

  • From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys.