Senior Scaled Abuse Scientist

Discord
San Francisco, CA

About The Position

Discord is used by over 200 million people every month for many different reasons, but there’s one thing that nearly everyone does on our platform: play video games. Over 90% of our users play games, spending a combined 1.5 billion hours playing thousands of unique titles on Discord each month. Discord plays a uniquely important role in the future of gaming. We are focused on making it easier and more fun for people to talk and hang out before, during, and after playing games.

Scaled Abuse Countermeasures and Research (SCAR) is Discord's frontline team responsible for detecting, investigating, and disrupting scaled abuse, including spam, scams, account compromise, payments fraud, and other platform manipulation. We sit at the intersection of Data Science, Machine Learning, and Anti-Abuse Engineering, and we operate on two tracks: rapid response to active threats and proactive research into emerging ones.

As a Senior Scaled Abuse Scientist, you'll protect hundreds of millions of users while shaping the Safety strategy that keeps Discord trustworthy and resilient. Your research will directly influence what Safety ML builds and what Product prioritizes. This role operates on a dual track: roughly 50% incident response and operations, and 50% proactive threat research. In practice, that balance shifts with the threat environment: when attacks are active, you're on the front lines; when things are quieter, you're building the knowledge and signals that prevent the next wave.

Successful candidates bring a scientist's curiosity and an operator's urgency. You're as comfortable digging through a dataset during an incident as you are writing a research proposal for a threat vector nobody's named yet. You'll report directly to the SCAR manager.

Requirements

  • 3+ years of experience in Trust & Safety, with a track record of proactive, innovative approaches to combating scaled abuse, fraud, or adversarial threats
  • 5+ years working with Python or similar scripting languages for data analysis in a production or investigative context, and a high level of proficiency with SQL on large datasets
  • A genuine passion for protecting users: tenacious, creative, empathetic, and able to sustain that energy in an adversarial environment that never fully quiets down
  • Strong investigative instincts: you get to root causes, not just symptoms, and you know how to separate signal from noise
  • Ability to think from first principles, approaching complex problems with creativity, clear reasoning, and pragmatic solutions
  • Fluency with the adversarial mindset: you understand threat actor incentives, how abuse economies work, and how bad actors adapt to detections
  • Experience designing and running experiments in production: formulating hypotheses, measuring impact, and knowing when you can and can't run a structured test
  • Excellent communication and collaboration skills, with a history of partnering effectively across engineering, data science, legal, policy, and product teams
  • A growth mindset: seeking feedback, reflecting on decisions, and continuously improving

Nice To Haves

  • Experience developing ML models or feature engineering to improve existing classifiers
  • Background in statistics or causal inference, particularly in applied or production settings
  • Familiarity with internet infrastructure signals (IP reputation, proxy detection, domain analysis) and how they inform abuse detection
  • Background in Security Engineering, Fraud, or a related adversarial discipline outside of traditional Trust & Safety

Responsibilities

  • Investigate and disrupt active scaled abuse threats across spam, fraud, account compromise, and platform manipulation, including by designing and deploying heuristic rules and ML-based detections to stop bad actors
  • Conduct proactive threat research into emerging abuse vectors, studying adversarial incentives, ecosystem dynamics, and platform vulnerabilities before they become big problems
  • Develop and operationalize new metrics and signals that bring visibility into previously unmeasured problem spaces
  • Propose projects and investments and build buy-in from Safety leadership and XFN stakeholders
  • Consult with Product teams on features in development, proposing abuse mitigations and safety-by-design recommendations before launch
  • Serve as an on-call responder for scaled abuse incidents: triaging escalations, driving rapid investigations, and communicating clearly to Safety leadership and XFN partners under pressure
  • Design and run experiments to measure the impact of detections and interventions, including in environments where structured experimentation isn't possible
  • Collaborate closely with Safety ML, Safety Engineering, T&S Operations, and CX on complex, time-sensitive issues
  • Mentor and support teammates, contributing to a culture of rigor, knowledge-sharing, and continuous improvement