Staff Autonomy Safety Engineer, Robot Safety

1X Technologies AS
San Carlos, CA · Onsite

About The Position

At 1X, we are building humanoid robots that work alongside humans to solve labor shortages and create abundance. Our robots operate in real-world, human environments—bringing AI out of simulation and into everyday life.

We are hiring a Staff Autonomy Safety Engineer to lead safety assurance for machine learning–driven autonomy systems. You will ensure that perception, prediction, and decision-making systems operate safely under real-world conditions, degrade gracefully under uncertainty, and remain robust in complex, human-facing environments.

This is a high-impact, deeply technical role focused on advancing AI safety for embodied systems. You will work at the intersection of autonomy, safety engineering, and real-world deployment, partnering closely with AI, robotics, and security teams. You will report to the Director of Robot Safety.

This is a staff-level, hands-on technical role. We are looking for someone who can deeply analyze AI system behavior, define safety frameworks, and work directly with engineering teams to implement safeguards in production systems. You are expected to operate with high autonomy, influence cross-functional teams, and contribute directly to the safety of deployed robots.

Requirements

  • M.S. or higher in Engineering, Computer Science, Robotics, or related field
  • 10+ years of experience in AI/ML, robotics, or autonomous systems, with focus on safety-critical systems
  • Strong programming skills in Python and familiarity with ML frameworks
  • Experience working with real-world deployed learning-based systems
  • Deep understanding of ML robustness, generalization failures, edge cases, and failure modes
  • Experience analyzing safety risks in human-interacting systems
  • Ability to operate at the intersection of AI, systems engineering, and safety

Nice To Haves

  • Experience with autonomous vehicles or robotics safety
  • Familiarity with safety frameworks (e.g., FMEA, FTA, SOTIF, UL 4600)
  • Experience with adversarial ML or AI security
  • Background in formal methods or verification for ML systems
  • Experience defining runtime safety systems or guardrails for AI

Responsibilities

  • Identify and assess AI-specific hazards in end-to-end autonomy systems
  • Define and enforce safety constraints for AI-driven robot behavior involving humans, objects, and environments
  • Partner with Functional Safety to translate AI risks into system-level requirements and mitigations
  • Collaborate with AI teams to build runtime guardrails that validate and constrain AI-generated actions
  • Evaluate risks from dataset bias, distribution shift, model drift, and rare and edge-case failure modes
  • Work with Cybersecurity to assess risks from adversarial inputs, prompt injection, and misuse scenarios
  • Provide input on residual risk, uncertainty, and confidence levels in AI behavior
  • Help define safety strategies for real-world deployment of autonomous systems

Benefits

  • Health, dental, and vision insurance
  • 401(k) with company match
  • Paid time off and holidays