Data Engineer II

WHOOP · Boston, MA
Posted 21h ago · $125,000 - $175,000 · Onsite

About The Position

At WHOOP, we are on a mission to unlock human performance. WHOOP empowers members to perform at a higher level through a deeper understanding of their bodies and daily lives. WHOOP is seeking an experienced Data Engineer who thrives on innovation and takes ownership of building and evolving data systems at scale. In this role, you will design, build, and optimize scalable data pipelines and platforms that power our data-driven insights. You will play a key role in shaping robust ELT architectures, improving reliability and performance, and influencing technical direction across the data platform. With a strong focus on modern AWS infrastructure and tooling such as Snowflake, DBT, Kafka, and Spark, you will help elevate our analytical and operational capabilities. If you are excited about using AI to improve developer productivity and drive meaningful impact, we want you to join our team.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience
  • 3-5 years of professional experience building and operating ETL/ELT pipelines in production environments
  • Strong proficiency in SQL and hands-on experience with modern data warehousing concepts and dimensional modeling
  • Professional experience using Python for data engineering, including writing clean, testable, and reusable code

Nice To Haves

  • Experience with DBT for data modeling, testing, and documentation
  • Experience with Spark and Kafka for batch or streaming data processing
  • Strong problem-solving skills, clear communication, and the ability to work independently while collaborating in an agile environment
  • Comfort using AI tools such as Copilot or ChatGPT to improve efficiency throughout the software development lifecycle

Responsibilities

  • Design, build, and operate scalable ELT pipelines using Python and PySpark, with a focus on reliability, performance, and maintainability
  • Own and improve batch and streaming data systems using Spark and Kafka, including monitoring and resolving production data issues
  • Develop and optimize Snowflake data models and DBT transformations to support analytics, experimentation, and trusted metrics
  • Partner with data scientists, analysts, and product teams to translate business requirements into well-designed data solutions
  • Contribute to the evolution of the data platform by improving observability, data quality, and engineering best practices
  • Leverage AI tools to accelerate development, improve code quality, and automate repetitive data engineering workflows