About The Position

We are looking for a Research Scientist or Engineer to join our Foundation Model Evaluation team. In this role, you will design and build evaluation methodology that measures what matters: how well our models perform at the frontier of key capabilities, and how well they serve real users across Apple products on billions of active devices. You will turn evaluation insights into signals that make models better.

This is a hands-on role focused on the models that power Apple products used daily by over a billion people. You will design evaluation systems where the outcome is not just a score, but an actionable signal that drives model improvement and predicts real user experience. Working alongside model training and product teams, you will close the loop between evaluation and improvement.

Our work spans three areas:

  • Frontier capability assessment: benchmarking against the state of the art in reasoning, code, knowledge, and agentic workflows
  • Product-aligned evaluation: measuring model quality in ways that reflect real user experience
  • Evaluation-to-training integration: feeding actionable insights back into the model development cycle

You may focus on one area or work across multiple, depending on your background and interests.

Requirements

  • 3+ years of experience in AI model evaluation, NLP, or a related area (e.g., natural language generation, information retrieval, or conversational AI)
  • Strong fundamentals in machine learning, natural language processing, and statistical analysis
  • Proficiency in Python and experience with ML frameworks (PyTorch, JAX, or equivalent)
  • Demonstrated ability to translate research insights into practical implementations
  • Strong experimental design skills: ability to design rigorous comparisons and draw valid conclusions from results
  • Clear technical communication: ability to distill evaluation results into actionable recommendations for cross-functional partners
  • MS or PhD in Computer Science, Machine Learning, Natural Language Processing or a related technical field. Equivalent practical experience will be considered.

Nice To Haves

  • PhD in Computer Science, Machine Learning, NLP, or a related field
  • Direct experience evaluating large language models, e.g., benchmark design or model-based judging
  • Track record of collaborating with model training and data teams to turn evaluation findings into training improvements
  • Experience building reusable evaluation tooling or analysis frameworks adopted across teams
  • Familiarity with human evaluation methodology and experience partnering with annotation teams or vendors to assess model quality

What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Number of Employees

5,001-10,000 employees
