Staff Data Engineer

Laurel
Remote

About The Position

Laurel's infrastructure, data, and security team is lovingly named the Time Owls. We make Laurel's infrastructure more reliable, more secure, and easier to use. Our team builds and maintains the tools, platforms, and automation that let engineers across the company work with AWS, Kubernetes, and other services easily and safely. We contribute to every application in the company and act as both best-practice advocates and policy enforcers.

This role sits at the intersection of infrastructure and AI. You'll work closely with our AI engineers and data scientists, focusing on making their work fast, efficient, scalable, and secure. You'll own data pipelines, model-serving infrastructure, and the systems that move data reliably through our platform, from ingestion through to production inference.

If you are more interested in general infrastructure than in data and ML systems, please check out our Platform Engineering role.

Requirements

  • Strong Python skills with experience building and deploying APIs to support AI/ML teams in delivering models to end users.
  • Experience with Airflow, Kubernetes, and AWS.
  • Experience with machine learning infrastructure and large language models.
  • Development experience with Terraform.
  • Experience working in startup environments.
  • Experience participating in a regular engineering on-call rotation.
  • Experience with PostgreSQL and MongoDB.
  • Experience with OLAP systems (e.g., Snowflake, ClickHouse).
  • Experience with AI-specific databases (e.g., vector databases).
  • Experience with event streaming platforms (e.g., Kafka, SQS).

Nice To Haves

  • Experience with Argo (Workflows and/or CD).
  • Experience with cdk8s.
  • Experience with OpenTelemetry and observability platforms (e.g., Observe, Datadog).
  • Experience with Spacelift.
  • Experience with CircleCI and/or GitHub Actions.
  • Experience with TypeScript and Go.

Responsibilities

  • Design and evolve Laurel's global, large-scale data platform architecture.
  • Design and operate large-scale data pipelines processing millions of events per day.
  • Define best practices for data ingestion, transformation, processing, and storage.
  • Mentor the team and grow our shared knowledge of big-data best practices while encouraging modern techniques.
  • Work on Python API services (Django) to improve how we serve our AI infrastructure.
  • Build, improve, and maintain Airflow DAGs, and optimize MongoDB and PostgreSQL usage for data workloads.
  • Design and operate streaming data infrastructure, including migrations to modern event-driven architectures (e.g., Kafka).
  • Work on internal OLAP and data lake systems (e.g., ClickHouse).
  • Develop and manage Kubernetes, Docker, and compute infrastructure for all of engineering.
  • Improve continuous integration, continuous deployment, and other automation.
  • Write backend code to help teams deliver functionality when there are deadlines or resource constraints.
  • Attend quarterly offsites (required travel), team standups, and other company meetings.
  • Raise the bar for engineering quality and advocate for best practices across the organization.

Benefits

  • Comprehensive medical/dental/vision coverage with covered premiums, 401(k), and additional benefits including wellness, commuter, and FSA stipends.
  • A smart, fun, collaborative, and inclusive team.
  • Great employee benefits, including equity and a 401(k).
  • Bi-annual, in-person company offsites in unique locations to grow and share time with the team.
  • An opportunity to perform at your best while growing, making a meaningful impact on the company's trajectory, and embodying our core values: understanding your "why," dancing in the rain, being your whole self, and sanctifying time.