The Global Red Team Tester will conduct adversarial testing of AI models to find vulnerabilities, develop new evaluation frameworks, communicate risks to stakeholders across global teams and the network of member firms, collaborate with defensive and development teams, and mentor junior team members.

% of Time / Accountability:

10% - Design and execute comprehensive adversarial testing campaigns against AI models, including but not limited to large language models, multimodal systems, and autonomous agents.

10% - Research attack vectors and prompt injection techniques to identify model vulnerabilities, jailbreaks, and unintended behaviors. Evaluate and introduce AI security tools that decrease the mean time to detect and respond to AI-specific threats.

50% - Conduct red team exercises simulating real-world deployment scenarios and edge cases. Systematically probe for bias, toxicity, misinformation generation, and other harmful outputs across diverse contexts and demographics. Improve the firm's security posture against emerging AI threats. The focus of this work is executing standardized security tests.

10% - Create adversarial datasets and benchmarks to evaluate model robustness under various attack conditions.

10% - Collaborate with security teams (e.g., GSOC, member firm security teams, developers) to perform red teaming activities, and document and present findings, vulnerabilities, and remediation recommendations to drive the mitigation of identified risks.

10% - Develop and maintain process documentation, create reports on testing activities, track operational metrics, and perform other programmatic tasks as required to support the AI Red Team's function.
Job Type: Full-time
Career Level: Mid Level