Multimodal Red Team Expert

Reinforce Labs, Inc.
Remote

About The Position

We are looking for a creative “breaker” to join our team as a Multimodal Red Team Expert. In this role, you won’t just be prompting AI models; you’ll be stress-testing them across modalities. Think adversarial image-text pairings, visual prompt injection, manipulated media, and cross-modal exploits that slip past safety classifiers designed to catch text alone. You’ll generate adversarial multimodal content and evaluate model outputs against structured safety taxonomies, probing the seams where vision, language, and audio intersect. If you think in compositions rather than single inputs, this is your role. This is an asynchronous, remote position designed for self-starters who thrive in the gray areas between visual media, linguistics, and security.

Requirements

  • Heavy Multimodal AI Usage — hands-on experience with vision-language models, image generation systems, and multimodal assistants (open- and closed-source). You’ve pushed these systems and know where they crack.
  • You have a “hacker mindset” that extends to visual media. You don’t just think about what to type—you think about what image to pair it with, what metadata to embed, what visual context shifts the meaning.
  • You’re visually literate. You understand framing, context manipulation, and how images carry implicit meaning that models may misread or miss entirely.
  • You can turn a chaotic afternoon of multimodal prompt-hacking into a clean, calibrated, actionable report with severity ratings and reproducible examples.
  • You understand the weight of this work. You can handle sensitive or “dark” content across text and visual modalities—professionally and within ethical boundaries.
  • You’re comfortable with ambiguity. Multimodal harms are often more subjective than text-only harms, and you can make consistent judgment calls without needing every case to be clear-cut.
  • Proven ability to navigate complex model restrictions using creative evasion techniques—across text and visual input channels.
  • Proficiency with image manipulation and generation tools (Photoshop, GIMP, Stable Diffusion, Midjourney, or equivalent). You can create the adversarial content, not just describe it.
  • Familiarity with AI safety concepts: content policy taxonomies, harm severity frameworks, false refusal vs. false compliance tradeoffs.
  • Awareness of visual misinformation vectors: deepfakes, cheapfakes, manipulated screenshots, and synthetic media.
  • You don’t give up when a model says “I cannot fulfill this request.” You find a new angle—and when the text angle is exhausted, you try an image.

Nice To Haves

  • Background in content moderation, digital forensics, OSINT, offensive security, or red teaming is a major plus.
  • Experience with structured annotation workflows, rubric-driven evaluation, and inter-annotator agreement processes is a plus.

Responsibilities

  • Cross-Modal Attack Design: Create adversarial image-text pairings, manipulated screenshots, and synthetic media designed to bypass multimodal safety layers—where each input looks benign alone, but the combination is not.
  • Visual Exploit Discovery: Use your eye for visual context, framing, and implicit meaning to find harms that automated image classifiers and text-only filters miss—deepfakes, out-of-context imagery, steganographic prompt injection, OCR pipeline exploits.
  • Model Evaluation: Systematically evaluate and rank multimodal model outputs against calibrated severity rubrics to determine where safety guardrails are failing, over-refusing, or producing cross-modal inconsistencies.
  • Knowledge Loop: Document your attack vectors, failure patterns, and reproducible examples clearly—producing actionable intelligence reports that help model developers patch vulnerabilities.
  • Campaign Execution: Participate in structured red-teaming campaigns with defined deliverables, progress tracking via master trackers, and inter-annotator reliability targets.