AI Red Tester

Distro
Iselin, NJ · Remote
$45 - $55

About The Position

The Global Red Team Tester will conduct testing of AI models to find vulnerabilities, develop new evaluation frameworks, communicate risks to stakeholders across the global teams and the network of member firms, collaborate with defensive and development teams, and mentor junior team members.

% of Time / Accountability:

  • 10% - Design and execute comprehensive adversarial testing campaigns against AI models, including but not limited to large language models, multimodal systems, and autonomous agents.
  • 10% - Research attack vectors and prompt injection techniques to identify model vulnerabilities, jailbreaks, and unintended behaviors. Evaluate and introduce AI security tools that decrease mean time to detect and respond to AI-specific threats.
  • 50% - Conduct red team exercises simulating real-world deployment scenarios and edge cases. Systematically probe for bias, toxicity, misinformation generation, and other harmful outputs across diverse contexts and demographics. Improve the firm's security posture against emerging AI threats. The concentration is on executing standardized security tests.
  • 10% - Create adversarial datasets and benchmarks to evaluate model robustness under various attack conditions.
  • 10% - Collaborate with security teams (e.g., GSOC, member firm security teams, developers) to perform red teaming activities, and document and present findings, vulnerabilities, and remediation recommendations to drive the mitigation of identified risks.
  • 10% - Develop and maintain process documentation, create reports on testing activities, track operational metrics, and perform other programmatic tasks as required to support the AI Red Team's function.

Requirements

  • Minimum 5 years of penetration testing or red team operations experience.
  • Bachelor's degree or higher (preferred) in computer science, information technology, or cybersecurity from an accredited college or university, or equivalent work experience.
  • Background in AI red teaming, web application penetration testing, application/network penetration testing, red team operations, or cyber security.
  • Familiarity with threat intelligence.
  • Understanding of artificial intelligence, machine learning, software applications, cloud computing, and networking.
  • Excellent communication and stakeholder management skills.
  • Ability to work effectively across geographies and time zones.
  • Testing tools: familiarity with some of SPLX, Garak, TextAttack, PyRIT, Burp Suite, Metasploit, Nessus, Cobalt Strike/Mythic/C2, Nmap, sqlmap, or similar tools.
  • Web application technologies and layer 7 protocols (e.g., HTTP, DNS, FTP).
  • Technical experience in Generative AI, Agents, A2A, and/or MCP
  • Programming languages: familiarity with some of Python, Ruby, Go, PowerShell, Bash, or similar.

Nice To Haves

  • Certifications: OSCP, GPEN, GXPN, AI Red Teamer Job Role Path (Hack The Box), AISEC+, or other industry AI or red team related certifications.
