3D Sparse Diffusion Specialist

World Labs · San Francisco, CA
Posted 7 days ago · $250 - $350

About The Position

At World Labs, we’re building Large World Models—AI systems that understand, reason about, and interact with the physical world. Our work sits at the frontier of spatial intelligence, robotics, and multimodal AI, with the goal of enabling machines to perceive and operate in complex real-world environments. We’re assembling a global team of researchers, engineers, and builders to push beyond today’s limitations in artificial intelligence. If you’re excited to work on foundational technology that will redefine how machines understand the world—and how people interact with AI—this role is for you.

About World Labs

World Labs is an AI research and development company focused on creating spatially intelligent systems that can model, reason, and act in the real world. We believe the next generation of AI will not live only in text or pixels, but in three-dimensional, dynamic environments—and we are building the core models to make that possible. Our team brings together expertise across machine learning, robotics, computer vision, simulation, and systems engineering. We operate with the urgency of a startup and the ambition of a research lab, tackling long-horizon problems that demand creativity, rigor, and resilience. Everything we do is in service of building the most capable world models possible—and using them to empower people, industries, and society.

Role Overview

We’re looking for a Research Scientist focused on 3D & Sparse Diffusion to develop next-generation generative models that operate natively in 3D or over sparse, structured representations. This role is for someone excited about pushing the frontier of diffusion-based generative modeling beyond dense grids—into point clouds, implicit representations, multi-view observations, and hybrid 2D/3D formulations. This is a research-forward, hands-on role at the intersection of generative modeling, 3D representations, and scalable learning systems. You’ll work closely with other research scientists and engineers to invent, evaluate, and deploy diffusion models that power high-fidelity 3D generation, reconstruction, and editing in real-world product settings.

Requirements

  • 5+ years of experience in generative modeling, 3D learning, or related areas within machine learning research.
  • Hands-on experience designing or training diffusion models, with demonstrated work on 3D-native, sparse, or structured representations (see the illustrative sketch after this list).
  • Strong background in modern 3D representations (e.g., point-based, implicit, volumetric, or hybrid) and their interaction with learning-based models.
  • Proficiency in Python and deep learning frameworks (e.g., PyTorch), with experience building research-grade training and evaluation code.
  • Solid understanding of probabilistic modeling, optimization, and large-scale training dynamics.
  • Experience publishing at top-tier venues or contributing to influential research or open-source projects in generative modeling or 3D.
  • Ability to operate independently in ambiguous research spaces, from idea formulation through experimental validation.
  • Strong scientific communication skills and a bias toward clarity, rigor, and reproducibility.
  • Enthusiasm for collaborating with interdisciplinary teams spanning research, engineering, and product.
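
To give a concrete sense of the hands-on diffusion work described above, the sketch below shows a minimal epsilon-prediction training step for a denoiser operating directly on point clouds. It is a simplified illustration in PyTorch; the PointDenoiser model, the linear beta schedule, and all shapes and hyperparameters are assumptions made for exposition, not World Labs code.

    # Illustrative sketch (not World Labs code): a minimal DDPM-style
    # epsilon-prediction training step for a denoiser that operates directly
    # on point clouds. Model, schedule, and shapes are hypothetical.
    import torch
    import torch.nn as nn

    class PointDenoiser(nn.Module):
        """Tiny per-point MLP standing in for a real 3D-native denoiser."""
        def __init__(self, dim=3, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim + 1, hidden), nn.SiLU(),
                nn.Linear(hidden, hidden), nn.SiLU(),
                nn.Linear(hidden, dim),
            )

        def forward(self, x_t, t):
            # x_t: (B, N, 3) noisy points; t: (B,) timesteps scaled to [0, 1]
            t_feat = t[:, None, None].expand(-1, x_t.shape[1], 1)
            return self.net(torch.cat([x_t, t_feat], dim=-1))

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)               # linear beta schedule
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # alpha-bar_t

    def training_step(model, x0, optimizer):
        """Add noise at a random timestep, then regress the noise (epsilon)."""
        B = x0.shape[0]
        t = torch.randint(0, T, (B,))
        a_bar = alphas_cumprod[t][:, None, None]
        eps = torch.randn_like(x0)
        x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
        loss = nn.functional.mse_loss(model(x_t, t.float() / T), eps)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        model = PointDenoiser()
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
        x0 = torch.randn(8, 2048, 3)  # batch of 8 clouds, 2048 points each
        print(training_step(model, x0, opt))

In practice the per-point MLP would be replaced by a 3D-native backbone such as a sparse convolutional network or a point transformer, but the noising and loss structure would look broadly similar.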

Responsibilities

  • Research and develop 3D-native and sparse diffusion models for generating and refining geometry, appearance, and scene structure.
  • Design diffusion processes over sparse or structured domains (e.g., point clouds, implicit fields, multi-view features, hybrid representations) with an emphasis on efficiency and fidelity.
  • Explore novel noise schedules, conditioning strategies, and sampling algorithms tailored to 3D and sparse data (see the schedule-and-sampling sketch after this list).
  • Build end-to-end training pipelines for large-scale diffusion models, including data preparation, supervision strategies, and evaluation metrics.
  • Collaborate with 3D reconstruction and modeling teams to integrate diffusion-based components into broader systems for generation, reconstruction, and editing.
  • Analyze model behavior and failure modes specific to sparse and 3D settings, and propose principled improvements to robustness and controllability.
  • Optimize training and inference performance, balancing sample quality, compute efficiency, and scalability.
  • Contribute to the team’s research output through publications, technical reports, and internal knowledge sharing.
  • Stay current with—and help shape—emerging research directions in generative modeling, diffusion, and 3D learning.
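
As a concrete, deliberately simplified illustration of the noise-schedule and sampling design mentioned above, the sketch below implements a cosine alpha-bar schedule and a single DDPM-style ancestral reverse step. The function names, schedule constants, and the stand-in epsilon predictor are placeholder assumptions, not a description of World Labs systems.

    # Illustrative sketch (not World Labs code): a cosine alpha-bar schedule
    # and one DDPM-style ancestral reverse step for an epsilon-prediction
    # model. Names and constants are placeholder assumptions.
    import math
    import torch

    def cosine_alphas_cumprod(T, s=0.008):
        """Cosine schedule for alpha-bar_t (Nichol & Dhariwal, 2021), t = 0..T."""
        steps = torch.arange(T + 1, dtype=torch.float64)
        f = torch.cos(((steps / T) + s) / (1 + s) * math.pi / 2) ** 2
        return (f / f[0]).clamp(min=1e-8)

    T = 1000
    a_bar = cosine_alphas_cumprod(T)
    betas = (1 - a_bar[1:] / a_bar[:-1]).clamp(max=0.999).float()
    alphas = 1.0 - betas
    a_bar_t = torch.cumprod(alphas, dim=0)

    @torch.no_grad()
    def reverse_step(model, x_t, t):
        """Sample x_{t-1} from x_t given a model that predicts the added noise."""
        eps = model(x_t, torch.full((x_t.shape[0],), t / T, dtype=x_t.dtype))
        mean = (x_t - betas[t] / (1 - a_bar_t[t]).sqrt() * eps) / alphas[t].sqrt()
        if t == 0:
            return mean                               # final step is noise-free
        return mean + betas[t].sqrt() * torch.randn_like(x_t)

    if __name__ == "__main__":
        dummy = lambda x, t: torch.zeros_like(x)   # stand-in epsilon predictor
        x = torch.randn(4, 2048, 3)                # a batch of noisy point clouds
        print(reverse_step(dummy, x, T - 1).shape)

Swapping in other schedules (linear, sigmoid, learned) or samplers (DDIM, DPM-Solver) only changes the schedule function and the reverse step; this is the kind of component the role would design, analyze, and tune for sparse 3D data.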

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 11-50
