About The Position

Generative AI is redefining creativity. Ensuring that these systems are safe, controllable, and respectful of intellectual property is one of the most important open research challenges in the field. The Adobe Firefly Applied Science & Machine Learning team is building next-generation multimodal guardrail systems that keep the image, video, and audio generative models powering Firefly.com safe and compliant. Our goal extends beyond reactive blocking; we are developing model-level guidance mechanisms that proactively steer generation away from IP-violating concepts while preserving creative intent and usability. We are seeking a P40 Applied Scientist with strong multimodal depth and research instincts to help define and develop the frontier of IP-aware generative modeling. This role sits at the intersection of generative model alignment, multimodal reasoning, and large-scale inference systems.

Research Areas You Will Drive

Inference-Time Alignment & Optimization

  • Research and implement inference-time control techniques (guided decoding, constrained sampling, classifier guidance, reward-based steering).
  • Optimize large multimodal systems for low-latency, production-scale deployment without sacrificing alignment quality.
  • Identify and mitigate failure modes in generative pipelines at scale.

Rapid Scientific Experimentation

  • Design and run rigorous experiments to evaluate trade-offs between creativity, fidelity, and IP safety.
  • Develop new evaluation methodologies and benchmarks for multimodal IP compliance.
  • Contribute novel technical insights that may lead to publications or internal intellectual property.

Vision-Language & Multimodal Reasoning

  • Advance the use of Vision-Language Models (VLMs) and multimodal foundation models for semantic IP understanding.
  • Explore joint reasoning between perception and generation systems to enable real-time steering.
  • Investigate how techniques such as multimodal embeddings and cross-attention mechanisms can be used for safety-aware inference.

Multimodal IP-Aware Generative Modeling

  • Develop novel approaches for integrating IP constraints directly into generative model behavior.
  • Investigate controllable generation techniques that shift models from post-hoc blocking toward guided, alignment-aware synthesis.
  • Develop training and fine-tuning strategies that embed guardrail signals into model representations.
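For context, the flavor of inference-time steering described above (classifier guidance, reward-based steering) can be illustrated with a toy one-dimensional sketch. Everything here is an illustrative assumption, not Firefly's actual pipeline: the linear "denoiser" pulling samples toward a desired mode, the Gaussian penalty standing in for a restricted-concept classifier, and the `guidance_scale` knob are all made up for demonstration.

```python
import math

# Toy 1-D sketch of classifier-guided / reward-steered sampling.
# A base "denoiser" pulls samples toward a desired mode at 0.0, while a
# Gaussian penalty centered on a restricted concept (at 2.0) steers
# samples away from it. All names and values are illustrative.

def penalty_grad(x, center=2.0, width=0.5):
    """Gradient of exp(-(x - center)^2 / (2 * width^2)) with respect to x."""
    return -(x - center) / width**2 * math.exp(-((x - center) ** 2) / (2 * width**2))

def guided_step(x, denoise_grad, guidance_scale=3.0, lr=0.1):
    """One update: follow the model's denoising direction, minus a
    steering term that descends the restricted-concept penalty."""
    return x + lr * (denoise_grad(x) - guidance_scale * penalty_grad(x))

denoise = lambda x: -(x - 0.0)  # base model: pull toward the mode at 0.0

x = 1.8  # initial sample, close to the restricted concept at 2.0
for _ in range(50):
    x = guided_step(x, denoise)
# x ends up near the safe mode (0.0), far from the restricted region (2.0)
```

In real multimodal systems the same idea plays out in latent space: the steering term comes from a learned safety or IP classifier, and the guidance scale trades off compliance against creative fidelity.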

Requirements

  • PhD or MS in Computer Science, Machine Learning, AI, or related field.
  • 5+ years of experience in applied ML or generative AI research (industry or academia).
  • Strong background in large-scale generative models (diffusion models, multimodal transformers, autoregressive systems).
  • Deep experience with model fine-tuning, alignment strategies, and representation learning.
  • Expertise in Vision-Language Models or multimodal foundation models.
  • Proficiency in Python and modern ML frameworks (e.g., PyTorch), with experience training and deploying large models.
  • Strong experimental development and statistical evaluation skills.
  • Experience analyzing complex failure modes in multimodal systems.
  • Understanding of large-scale inference systems and production ML constraints.
  • Ability to navigate open-ended research spaces and identify high-leverage problems.
  • Demonstrated ability to use AI coding tools and AI-assisted development workflows to rapidly prototype, experiment, and scale research impact.
  • Comfort operating in an AI-augmented development environment, using generative tools to increase iteration speed, code quality, and research throughput.
  • Ability to combine scientific rigor with high-velocity execution.
  • Experience working in cross-functional research-to-product environments.
  • Ability to clearly communicate complex scientific ideas to diverse collaborators.

Nice To Haves

  • Research contributions in controllable generation, alignment, AI safety, or multimodal learning.
  • Publications in leading conferences (CVPR, ICCV, NeurIPS, ICML, ICLR, SIGGRAPH) or equivalent industry impact.
  • Experience deploying generative models to large user bases.
  • Background in safety evaluation frameworks, explainability, or adversarial robustness.

Responsibilities

  • Architect and evolve the Firefly IP Guard pipeline across first-party and third-party generative models.
  • Collaborate with research scientists, ML engineers, and other key partners (e.g., applied ethics and legal) to translate scientific advances into deployed systems.
  • Communicate with non-technical collaborators to drive awareness of the implemented technologies.
  • Drive end-to-end experimentation, from hypothesis formulation through model implementation and large-scale evaluation.
  • Contribute to the broader research strategy around generative AI alignment and safety within Adobe.


What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Education Level

Ph.D. or professional degree
