AI Program Lead, ERM

Liberty Mutual Insurance, Boston, MA

About The Position

We are seeking a strategic and execution-oriented Responsible AI (RAI) Program Lead to own and evolve the enterprise Responsible AI risk governance framework. This role is accountable for ensuring our use of AI technologies is safe, ethical, and aligned with the firm’s values, risk appetite, and regulatory expectations.

Reporting to the Chief Risk Officer, the RAI Program Lead will design, operate, and continuously improve our Responsible AI governance program across the enterprise. This includes defining and maintaining the RAI operating model, policy and process infrastructure, and governance forums, as well as driving organization-wide awareness and adoption. The role serves as a central risk and governance point of coordination across business, technology, legal, risk, and compliance functions, embedding Responsible AI considerations into day-to-day AI decision-making and delivery.

This is a risk ownership and governance role within the second line of defense. While the role does not directly build AI systems or tooling, it partners closely with teams that do, providing independent risk perspective, guidance, and oversight. This is an individual contributor role with enterprise-wide influence, executed through partnership and collaboration across functions.

Requirements

  • Competencies typically acquired through a Bachelor's degree in a quantitative field and 10+ years of relevant experience
  • 7+ years of experience in risk management, governance, program management, or policy roles at the intersection of technology, data, and compliance
  • Demonstrated experience owning or being accountable for enterprise-level governance programs related to Responsible AI, data ethics, model risk, or similar domains
  • Working knowledge of AI and machine learning lifecycles, including familiarity with common AI risks (e.g., bias, explainability, privacy, model misuse)
  • Experience engaging with AI regulatory and ethical frameworks (e.g., NIST AI RMF, OECD AI Principles, ISO AI standards)
  • Proven ability to influence and partner with senior stakeholders across business, technology, legal, and risk functions in a matrixed environment
  • Strong written and verbal communication skills, with the ability to translate complex technical and regulatory concepts for diverse audiences

Nice To Haves

  • Advanced degree (MBA or equivalent) is highly preferred, as is a professional qualification in one or more areas of enterprise risk management or an equivalent discipline

Responsibilities

  • Own and operationalize the enterprise Responsible AI program roadmap, including capabilities, milestones, KPIs, and maturity assessments
  • Partner with senior stakeholders to integrate Responsible AI objectives into enterprise strategy, data and model governance, and AI-enabled product development
  • Lead the operation of Responsible AI governance forums (e.g., SteerCo, working groups), including agenda-setting, materials, action tracking, and executive reporting
  • Develop, maintain, and evolve Responsible AI policies, standards, and procedures aligned with internal risk appetite and emerging global regulation, in close partnership with Model Risk Management, Third Party Risk Management, Legal, Compliance, and Enterprise Risk Management
  • Design and maintain scalable Responsible AI processes for AI risk assessments, use case reviews, approvals, and issue escalation within a federated operating model
  • Provide risk oversight of AI/ML use cases across their lifecycle, including risk tiering, documentation standards, and lifecycle controls
  • Identify, assess, and escalate material Responsible AI risks and control gaps to appropriate governance forums and senior leadership
  • In partnership with Legal and Compliance, drive enterprise Responsible AI awareness and training through learning programs, communications, and community or ambassador networks
  • Collaborate with Talent, Technology, and Learning partners to embed Responsible AI principles into onboarding, role-based expectations, and ways of working
  • Serve as a key liaison to Legal, Compliance, Risk, and Audit to ensure alignment with regulatory expectations and internal control frameworks
  • Monitor evolving AI technologies, internal use cases, and external regulations and standards (e.g., EU AI Act, U.S. Executive Orders, ISO/IEC 42001), and recommend program and policy updates to governance bodies as needed
  • Define and deliver regular reporting on Responsible AI program effectiveness, issues, and risk trends to executive leadership and board-level committees
  • Support external disclosures, regulatory inquiries, and internal audits related to AI governance, as required

Benefits

  • Comprehensive benefits
  • Workplace flexibility
  • Professional development opportunities
  • Employee Resource Groups