About the Team
OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe achieving this goal requires real-world deployment and continuous iteration based on how our products are used—and misused—in practice.

The Intelligence and Investigations team supports this mission by identifying, analyzing, and investigating misuse of our products, particularly novel or emerging abuse patterns. Our work enables partner teams to develop data-backed product policies and build scalable safety mitigations. By precisely understanding abuse, we help ensure OpenAI’s products can be used safely to build meaningful, legitimate applications.

About the Role
As a Child Safety Investigator on the Intelligence & Investigations team, you will identify and disrupt actors attempting to use OpenAI’s products to sexually exploit minors, both online and in the real world. OpenAI maintains strict prohibitions in this area and reports apparent CSAM and other credible child sexual exploitation threats to the National Center for Missing and Exploited Children (NCMEC), consistent with applicable law and our policies.

This role requires domain-specific expertise, technical fluency, and the ability to operate in ambiguous, high-impact situations. You will conduct in-depth investigations into user behavior, analyze product data, identify emerging threat patterns, and support enforcement actions—including escalations requiring legal review and external reporting. You will also help develop detection strategies that proactively surface high-risk behavior, especially cases that evade existing safeguards.

This role includes responding to time-sensitive escalations. Investigations may involve exposure to sensitive and disturbing material, including sexual or violent content.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed