About The Position

We're seeking an AI Enablement Engineer to join our team in building, deploying, and scaling AI solutions across the enterprise. You'll work at the intersection of machine learning engineering, data engineering, and platform operations—building RAG systems, evaluating cutting-edge LLMs, and serving as a trusted advisor to business units and corporate functions navigating their AI adoption journey.

Requirements

  • 6+ years of experience in any of the following: site reliability engineering (SRE), software engineering, database engineering (DBA/DBRE), data engineering, or machine learning engineering
  • Strong proficiency in Python and experience building data pipelines
  • Hands-on experience with LLMs and understanding of prompting techniques, fine-tuning, and RAG architectures
  • Experience with vector databases (e.g., Pinecone, Weaviate, pgvector, ChromaDB)
  • Familiarity with cloud platforms, particularly AWS services
  • Excellent communication skills with the ability to explain technical concepts to non-technical stakeholders
  • Demonstrated ability to work independently and drive projects from concept to production

Nice To Haves

  • Experience with Amazon Bedrock, SageMaker, or similar managed ML services
  • Familiarity with Ollama or other local LLM deployment frameworks
  • Background in evaluating and benchmarking ML models
  • Experience with AI-assisted coding tools (Claude Code, Cursor, GitHub Copilot, etc.)
  • Understanding of MLOps practices and tools
  • Previous consulting or internal enablement experience
  • Knowledge of responsible AI practices and governance frameworks

Responsibilities

  • Design, build, and optimize Retrieval-Augmented Generation (RAG) systems for various enterprise use cases
  • Develop and maintain data ingestion pipelines (Python and other scripting languages) to populate and manage vector databases
  • Deploy and maintain AI infrastructure including Amazon Bedrock and Ollama environments
  • Implement evaluation frameworks and tooling (such as LLMComparator) to systematically compare LLM performance across different models and use cases
  • Conduct rigorous evaluations of LLMs for diverse applications including RAG, coding agents, and domain-specific tasks
  • Establish benchmarking standards and best practices for model selection
  • Stay current with the rapidly evolving LLM landscape and provide recommendations on emerging capabilities
  • Partner with teams, business units, and corporate functions to assess their AI needs and design appropriate solutions
  • Provide guidance on build vs. buy decisions—advising when third-party GenAI tools are appropriate versus when internal capabilities should be leveraged
  • Enable teams to adopt AI-assisted development tools including Claude Code, Cursor, and GitHub Copilot
  • Develop documentation, training materials, and best practices for enterprise AI adoption
  • Ensure reliability, performance, and cost-effectiveness of AI infrastructure and services
  • Monitor and optimize vector database performance and model inference costs
  • Collaborate with security and compliance teams to ensure responsible AI deployment

Benefits

  • Opportunity to shape AI strategy and adoption at the enterprise level
  • Work with cutting-edge AI technologies and tools
  • High-visibility role with exposure to leadership and diverse business units
© 2024 Teal Labs, Inc