You could be the one who changes everything for our 28 million members. Centene is transforming the health of our communities, one person at a time. As a diversified, national organization, you'll have access to competitive benefits, including a fresh perspective on workplace flexibility.

Candidates must be authorized to work in the U.S. without the need for employment-based visa sponsorship now or in the future. Sponsorship is not available for this opportunity, including employment-based visa types H-1B, L-1, O-1, H-1B1, F-1, J-1, OPT, or CPT.

Position Purpose: Leads the technical evaluation and assurance efforts within our AI Governance team. Establishes enterprise-grade, decision-relevant methodologies for red teaming, testing, and evaluating AI systems across traditional ML, Generative AI, and Agentic AI applications, ensuring evaluations directly inform AI governance decisions, deployment readiness, and ongoing oversight.

Develops reproducible frameworks to measure AI value, user impact, and broader outcomes to support responsible scaling, risk acceptance, and investment decisions
Designs rigorous evaluation methodologies for assessing AI system performance, safety, reliability, and alignment with intended use across the AI lifecycle, from development through deployment and monitoring
Develops criteria and benchmarks to determine whether existing evaluations are adequate and sufficient for different AI applications and risk profiles
Designs and executes comprehensive red team exercises to identify vulnerabilities, failure modes, and unintended behaviors across diverse AI systems, and devises solutions to address them
Establishes standards for evaluation coverage, rigor, and documentation across the AI lifecycle
Establishes reproducible methodologies for measuring business value, user impact, and societal outcomes of AI systems using causal inference and experimental design
Advances the scientific understanding of AI evaluation and safety through white papers and trainings
Provides technical leadership and mentorship to scientists, engineers, and compliance professionals while building organizational evaluation capabilities
Stays at the forefront of AI safety research and identifies novel risks emerging from advanced AI capabilities, particularly in frontier models
Translates complex technical findings into actionable recommendations for leadership, governance boards, and cross-functional teams
Collaborates with external researchers, institutions, and industry partners to advance evaluation methodology and contribute to the broader AI safety community
Performs other duties as assigned
Complies with all policies and standards
Job Type: Full-time
Career Level: Mid Level