AI Engineer, Quality

Fieldguide
San Francisco, CA (Onsite)

About The Position

Fieldguide is building AI agents for the most complex audit and advisory workflows. We're a San Francisco-based Vertical AI company building in a $100B+ market undergoing rapid transformation. Over 50 of the top 100 accounting and consulting firms trust us to power their most mission-critical work. We're backed by Bessemer Venture Partners, 8VC, Floodgate, Y Combinator, Elad Gil, and other top-tier investors.

As an AI Engineer, Quality, you will own the evaluation infrastructure that ensures our AI agents perform reliably at enterprise scale. This role is 100% focused on making evaluations a first-class engineering capability: building the unified platform, automated pipelines, and production feedback loops that let us evaluate any new model against all critical workflows within hours. You'll work at the intersection of ML engineering, observability, and quality assurance to ensure our agents meet the rigorous standards our customers demand.

We're hiring across all levels. We'll calibrate seniority during interviews based on your background and what you're looking to own. This role is for engineers who value in-person collaboration at our San Francisco, CA office.

Requirements

  • Multiple years of experience shipping production software in complex, real-world systems
  • Experience with TypeScript, React, Python, and Postgres
  • Built and deployed LLM-powered features serving production traffic
  • Implemented evaluation frameworks for model outputs and agent behaviors
  • Designed observability or tracing infrastructure for AI/ML systems
  • Worked with vector databases, embedding models, and RAG architectures
  • Experience with evaluation platforms (LangSmith, Langfuse, or similar)
  • Comfort operating in ambiguity and taking responsibility for outcomes
  • Deep empathy for professional-grade, mission-critical software (experience with audit and accounting workflows is not required)

Responsibilities

  • Design and build a unified evaluation platform that serves as the single source of truth for all of our agentic systems and audit workflows
  • Build observability systems that surface agent behavior, execution traces, and failure modes in production, along with feedback loops that turn production failures into first-class evaluation cases
  • Own the evaluation infrastructure stack, including integration with LangSmith and LangGraph
  • Translate customer problems into concrete agent behaviors and workflows
  • Integrate and orchestrate LLMs, tools, retrieval systems, and logic into cohesive, reliable agent experiences
  • Build automated pipelines that evaluate new models against all critical workflows within hours of release
  • Design evaluation harnesses for our most complex agentic systems and workflows
  • Implement comparison frameworks that measure effectiveness, consistency, latency, and cost across model versions
  • Design guardrails and monitoring systems that catch quality regressions before they reach customers
  • Use AI as core leverage in how you design, build, test, and iterate
  • Prototype quickly to resolve uncertainty, then harden systems for enterprise-grade reliability
  • Build evaluations, feedback mechanisms, and guardrails so agents improve over time
  • Work with SMEs and ML engineers to create evaluation datasets by curating production traces
  • Design prompts, retrieval pipelines, and agent orchestration systems that perform reliably at scale
  • Define and document evaluation standards, best practices, and processes for the engineering organization
  • Advocate for evaluation-driven development and make it easy for the team to write and run evals
  • Partner with product and ML engineers to integrate evaluation requirements into agent development from day one
  • Take full ownership of large product areas rather than executing on narrow tasks