Software Engineer IV

OKSI · Clearwater, FL

About The Position

We are seeking a hands-on Real-Time Computer Vision Engineer to develop and deploy onboard perception and vision-navigation systems for unmanned aerial systems (UAS). This role focuses on operational autonomy — object detection, tracking, recognition, and GPS-denied navigation — running in real time on embedded hardware. This is a deploy-to-flight role. Not modeling. Not simulation. Not offline research. You will build, optimize, integrate, and fly vision systems under real-world constraints.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Robotics, Electrical Engineering, or related field.
  • 4–8+ years of experience in real-time computer vision or autonomy systems.
  • Strong C++ and/or Python proficiency.
  • Experience deploying real-time vision algorithms to hardware platforms.
  • Solid understanding of:
      ◦ Object detection and multi-object tracking
      ◦ Feature detection and tracking
      ◦ Visual odometry or VIO
      ◦ Real-time system optimization
  • Experience with OpenCV and PyTorch or TensorFlow.
  • Must be eligible to obtain and maintain a DoD Secret clearance.
  • Willingness to support flight test operations and travel as needed.
  • To comply with U.S. Government export control regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. person as defined by law. A U.S. person includes a U.S. citizen, lawful permanent resident, or protected individual as defined by 8 U.S.C. 1324b(a)(3), or an individual otherwise eligible to obtain the required authorization from the U.S. Department of State.

Nice To Haves

  • Experience with UAS or airborne autonomy systems.
  • Experience with ROS/ROS2, PX4, MAVLink, or ArduPilot.
  • CUDA, TensorRT, or hardware acceleration experience.
  • EO/IR payload integration experience.
  • SLAM, GPS-denied navigation, or contested environment experience.
  • Experience supporting live flight testing.

Responsibilities

  • Develop and deploy real-time object detection, tracking, and recognition pipelines for airborne platforms.
  • Implement vision-based navigation capabilities (visual odometry, feature tracking, obstacle detection, VIO integration).
  • Optimize models for low-latency edge inference (Jetson, ARM, GPU acceleration).
  • Integrate perception systems with flight controls and autonomy stacks.
  • Support ground and flight testing; debug performance in live operational environments.
  • Implement sensor fusion across EO/IR, IMU, GPS, and telemetry inputs.
  • Transition prototype algorithms into reliable, production-ready systems.