Software Engineer II - Data Platform

Latitude AI - Detroit, MI

About The Position

Latitude AI (lat.ai) develops automated driving technologies, including Level 3 (L3) systems, for Ford vehicles at scale. We’re driven by the opportunity to reimagine what it’s like to drive and to make travel safer, less stressful, and more enjoyable for everyone. When you join the Latitude team, you’ll work alongside leading experts across machine learning and robotics, cloud platforms, mapping, sensors and compute systems, test operations, and systems and safety engineering, all dedicated to making a real, positive impact on the driving experience for millions of people. As a Ford Motor Company subsidiary, we operate independently to develop automated driving technology at the speed of a technology startup. Latitude is headquartered in Pittsburgh with engineering centers in Dearborn, Mich., and Palo Alto, Calif.

Meet the team: As Latitude’s Data Platform team, our mission is to build the scalable, high-performance infrastructure that turns structured data into insight to drive Latitude’s business forward. We provide foundational solutions for handling, processing, and visualizing large-scale data so that every team at Latitude, from Autonomy to Machine Learning to Fleet Operations, can measure what matters. We promote Data as a Product: applying rigorous software engineering discipline to data assets through versioning, public/private interfaces for data models, and a relentless focus on automated testing. To scale these principles across our org, we design self-service systems that prioritize continuous, iterative delivery, developer velocity, cost transparency, and efficiency by default.

Requirements

  • Bachelor's degree in Computer Engineering, Computer Science, Electrical Engineering, Robotics, or a related field plus 2+ years of relevant experience; or a Master's degree or PhD
  • Cloud & Infrastructure: Experience managing cloud infrastructure via Infrastructure as Code (IaC). You should be comfortable with Docker, Terraform, Kubernetes, Helm, and observability via Grafana or similar
  • Software Craftsmanship: Strong Python development skills with a focus on SOLID design principles. You have experience managing the full lifecycle of internal packages, from intake and design to release and maintenance
  • Automated Release Engineering (DevOps): A standard-bearer for CI/CD excellence. You understand how to build robust Jenkins or GitHub Actions pipelines that automate testing, linting, and deployment, ensuring a high-velocity but safe development environment
  • Data Fluency: Understanding of SQL and OLAP concepts. You are familiar with the architecture of Massively Parallel Processing (MPP) systems (e.g., BigQuery, Redshift, or ClickHouse) and understand how to optimize for both performance and cost (see the query-cost sketch after this list)
  • Privacy & Security by Design: Experience with end-to-end security and isolation. You understand the importance of least-privilege access, service account management, and automated data anonymization/governance
  • Distributed Orchestration and Streaming: Familiarity with data orchestration tools (e.g., Airflow, Dagster) and a conceptual understanding of distributed stream processing (e.g., the Apache Beam model). You should understand challenges like watermarks, stateful processing, and temporal consistency (see the windowing sketch after this list)
  • The "Internal Customer" Mindset: You are a product-minded engineer. You have the empathy to understand the pain points of your fellow engineers and the drive to build tools that are actually useful. You maintain a high bar for quality while prioritizing high-impact delivery
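
For illustration, a minimal sketch of the cost-aware querying the Data Fluency bullet describes, assuming BigQuery and its official google-cloud-bigquery Python client; the dataset, table, and column names are hypothetical:

```python
# Minimal sketch: estimate the cost of an MPP query before running it.
# Assumes Google BigQuery and the google-cloud-bigquery client; the
# table and column names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

# Filtering on the partition column lets the engine prune partitions,
# cutting both scan time and on-demand cost.
sql = """
    SELECT vehicle_id, COUNT(*) AS events
    FROM `telemetry.drive_events`          -- hypothetical table
    WHERE event_date = '2024-01-15'        -- partition filter prunes scans
    GROUP BY vehicle_id
"""

# A dry run reports the bytes that *would* be scanned without executing,
# which is the basis of BigQuery's on-demand pricing.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(sql, job_config=job_config)
print(f"Query would scan {job.total_bytes_processed / 1e9:.2f} GB")
```

A dry run like this can gate CI: it surfaces what a query would scan, and therefore cost, before anything executes.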
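Likewise, a hedged sketch of the automated anonymization the Privacy & Security bullet mentions; the keyed-hash approach, column names, and key handling are illustrative assumptions, not an actual scheme:

```python
# Minimal sketch of column-level pseudonymization: deterministic keyed
# hashing of identifiers so joins still work, but raw PII never lands in
# the warehouse. Key management shown here is hypothetical; a real system
# would fetch and rotate the key via a KMS.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-via-your-kms"  # hypothetical placeholder

def pseudonymize(value: str) -> str:
    """Keyed hash: stable across loads, irreversible without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

record = {"vin": "1FTEX1EP7JKD12345", "speed_mps": 12.4}
record["vin"] = pseudonymize(record["vin"])  # PII replaced before loading
print(record)
```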
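And a minimal event-time windowing sketch in the Apache Beam Python SDK, showing the watermark and late-data concepts the streaming bullet names; the pipeline shape and element values are hypothetical:

```python
# Minimal sketch of event-time windowing in the Beam model: the watermark
# tracks event-time progress, and allowed_lateness bounds how long
# per-window state is kept for stragglers.
import apache_beam as beam
from apache_beam.transforms.trigger import AccumulationMode, AfterWatermark
from apache_beam.transforms.window import FixedWindows, TimestampedValue

with beam.Pipeline() as p:
    (
        p
        | beam.Create([("lidar", 1.0), ("camera", 2.0), ("lidar", 65.0)])
        # Attach event-time timestamps so windowing reflects when data
        # happened, not when it arrived.
        | beam.Map(lambda kv: TimestampedValue(kv, kv[1]))
        # 60s fixed windows; fire when the watermark passes the window end,
        # retaining 30s of state for late-arriving elements.
        | beam.WindowInto(
            FixedWindows(60),
            trigger=AfterWatermark(),
            allowed_lateness=30,
            accumulation_mode=AccumulationMode.DISCARDING,
        )
        | beam.combiners.Count.PerKey()
        | beam.Map(print)
    )
```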

Nice To Haves

  • Experience with dbt is a plus

Responsibilities

  • A High-Performance Cloud-Native Stack: Work with a blend of serverless cloud analytics tools and self-hosted Kubernetes deployments to serve a diverse range of internal stakeholders
  • Abstracted Data Frameworks: Develop internal tooling that enables engineers to self-serve data loading and testing. You’ll work on systems capable of handling high-throughput, low-latency streaming and massively parallel bulk loads scaling to hundreds of terabytes daily, with robust support for monitoring and for both generic and customer-defined unit and integration tests of data. You’ll work closely with teams such as Latitude’s Autonomy Analytics team, among many others, to understand and generalize platform feature needs and to provide guidance and documentation on usage
  • Open Source & Custom Tooling: Deploy and scale industry-leading data engineering and analytics tools. We lean into the open-source ecosystem, deploying and extending solutions such as dbt, Airflow, and Superset to serve Latitude’s unique autonomy and ML use cases (a minimal orchestration sketch follows this list)
  • A Rich Metadata Layer: Provide the automation and cataloging tools that serve as the backbone for discovery and lineage tracing. You will play a key role in enabling a well-documented data surface ready for both human exploration and LLM/RAG-based consumption
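
To make the stack above concrete, a hedged sketch of the common pattern of orchestrating dbt from Airflow, assuming Airflow 2.x and the dbt CLI; the DAG id, schedule, and project path are hypothetical:

```python
# Minimal sketch: an Airflow DAG (Airflow 2.4+) that triggers a scheduled
# dbt build. The DAG id, schedule, and paths are hypothetical; a real
# deployment would add retries, alerting, and environment isolation.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_telemetry_models",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # dbt applies software-engineering discipline (tests, docs, versioned
    # models) to warehouse transformations; Airflow supplies the scheduled,
    # observable runs.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt/telemetry",
    )
```

Dagster or a dedicated dbt orchestrator would be an equally valid choice; the point is scheduled, observable, versioned runs.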

Benefits

  • Competitive compensation packages
  • High-quality individual and family medical, dental, and vision insurance
  • Health savings account with available employer match
  • Employer-matched 401(k) retirement plan with immediate vesting
  • Employer-paid group term life insurance and the option to elect voluntary life insurance
  • Paid parental leave
  • Paid medical leave
  • Unlimited vacation
  • 15 paid holidays
  • Daily lunches, snacks, and beverages available in all office locations
  • Pre-tax spending accounts for healthcare and dependent care expenses
  • Pre-tax commuter benefits
  • Monthly wellness stipend
  • Adoption/Surrogacy support program
  • Backup child and elder care program
  • Professional development reimbursement
  • Employee assistance program
  • Discounted programs that include legal services, identity theft protection, pet insurance, and more
  • Company and team bonding outlets: employee resource groups, quarterly team activity stipend, and wellness initiatives