Perception Engineer

moss
San Francisco, CA
Onsite

About The Position

Join us as a key founding engineer. We're a small, fast-moving team of five developing novel perception systems and algorithms to interpret the physical world in challenging, real-world environments. You will own the development and deployment of our 3D perception pipeline: building novel algorithms and multi-modal models (LiDAR, camera, GPS, and environmental data) to understand farms and enable autonomous robot operations. We move quickly, solve hard problems, and are energized by our customers' reactions. We're hiring for full-time (in-person) roles as well as interns for co-op, summer, and/or part-time positions.

Requirements

  • Impressive real-world projects beyond the classroom (robotics, perception, mapping, autonomy, etc.)
  • Hands-on experience with 3D sensor data (LiDAR, radar, depth cameras)
  • Strong C++ (templates, smart pointers, STL containers, algorithms)
  • Experience training ML models on custom datasets (data curation, labeling, training/eval loops)
  • Experience with object detection, semantic segmentation, and/or classical CV / point-cloud methods (e.g., clustering, registration, tracking)

Responsibilities

  • Own the full lifecycle of our 3D perception pipeline (research, prototype, production, deployment, iteration)
  • Build multimodal models and pipelines that fuse LiDAR, cameras, GPS, and metadata for 3D detection and analysis
  • Design robust algorithms for outdoor environments (harsh shadows, lighting shifts, motion blur, dust, severe occlusions)
  • Help build and maintain ML infrastructure for automated labeling, dataset management, training, and evaluation
  • Optimize and deploy models for real-time performance on edge hardware (latency, throughput, memory)
  • Explore new approaches like vision-language-action (VLA) models, imitation learning, and autonomy-oriented perception for robot tasks
© 2024 Teal Labs, Inc