About The Position

Patreon is a media and community platform where over 300,000 creators give their biggest fans access to exclusive work and experiences. We offer creators a variety of ways to engage with their fans and build a lasting business, including paid memberships, free memberships, community chats, live video, and selling to fans directly with one-time purchases.

Ultimately our goal is simple: fund the creative class. And we're leaders in that space, with:

  • $10 billion+ generated by creators since Patreon's inception
  • 100 million+ free memberships for fans who may not be ready to pay just yet
  • 25 million+ paid memberships on Patreon today

We're continuing to invest heavily in building the best creator platform with the best team in the creator economy, and are looking for a Senior Software Engineer, Data to support our mission.

This role is remote with optional in-person attendance in either the New York or San Francisco office. Expect to travel a handful of times per year for team-building and collaboration offsites.

The Data Engineering team at Patreon builds pipelines, models, and tooling that power both customer-facing and internal data products. As a Senior Software Engineer on the team, you'll architect and scale the data foundation that underpins our creator analytics product, discovery and safety ML systems, internal product analytics, executive reporting, experimentation, and company-wide decision-making.

We're a team of pragmatic engineers who care deeply about data quality, reliable systems, and clear definitions. We partner closely with Product, Data Science, Infrastructure, Finance, Marketing, and Sales teams to ensure that the insights driving Patreon's business and product are trustworthy, explainable, and actionable.

Requirements

  • 4+ years of experience in software development, including 2+ years building scalable, production-grade data pipelines.
  • Familiarity with SQL and distributed data processing tools such as Spark, Flink, or Kafka Streams.
  • Strong programming foundations in Python or a similar language, with sound software engineering practices (testing, CI/CD, monitoring).
  • Familiarity with modern data lakes (e.g., Delta Lake, Iceberg), data warehouses (e.g., Snowflake, Redshift, BigQuery), and production data stores, including relational databases (e.g., MySQL, PostgreSQL), object storage (e.g., S3), key-value stores (e.g., DynamoDB), and message queues (e.g., Kinesis, Kafka).
  • Excellent collaboration and communication skills; comfortable partnering with non-technical stakeholders, writing crisp design docs, giving actionable feedback, and influencing without authority across teams.
  • Understanding of data modeling and metric design principles.
  • Passion for data quality, system reliability, and empowering others through well-crafted data assets.
  • Highly motivated self-starter who thrives in a collaborative, fast-paced environment and takes pride in high-craft, high-impact work.
  • Bachelor’s degree in Computer Science, Computer Engineering, or a related field, or equivalent practical experience.

Responsibilities

  • Design, build, and maintain the pipelines that power all data use cases. This includes ingesting raw data from production databases, object storage, message queues, and vendors into our data lake, and building core datasets and metrics.
  • Develop intuitive, performant, and scalable data models (facts, dimensions, aggregations) that support product features, internal analytics, experimentation, and machine learning workloads.
  • Implement robust batch and streaming pipelines using Spark, Python, and Airflow.
  • Build pipelines that adhere to standards for accuracy, completeness, lineage, and dependency management, with monitoring and observability so teams can trust what they’re using.
  • Work with Product, Data Science, Infrastructure, Finance, Marketing, and Sales to turn ambiguous questions into well-scoped, high-impact data solutions.
  • Pay down technical debt, improve automation, and follow best practices in data modeling, testing, and reliability. Mentor earlier-career engineers on the team.
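The data modeling work described above (facts, dimensions, and aggregations feeding metrics) can be sketched with a toy example. This is a minimal illustration only: the schema and names below (`dim_creator`, `fact_pledge`, the pledge columns) are hypothetical assumptions for demonstration, not Patreon's actual data models, and it uses an in-memory SQLite database in place of a warehouse.

```python
import sqlite3

# Hypothetical schema for illustration only -- not Patreon's real models.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A dimension table describing creators, and a fact table of pledge events.
cur.executescript("""
CREATE TABLE dim_creator (
    creator_id INTEGER PRIMARY KEY,
    name       TEXT,
    category   TEXT
);
CREATE TABLE fact_pledge (
    pledge_id    INTEGER PRIMARY KEY,
    creator_id   INTEGER REFERENCES dim_creator(creator_id),
    amount_cents INTEGER,
    pledged_on   TEXT  -- ISO date
);
""")

cur.executemany("INSERT INTO dim_creator VALUES (?, ?, ?)", [
    (1, "Ada", "podcast"),
    (2, "Grace", "video"),
])
cur.executemany("INSERT INTO fact_pledge VALUES (?, ?, ?, ?)", [
    (10, 1, 500, "2024-06-01"),
    (11, 1, 300, "2024-06-01"),
    (12, 2, 700, "2024-06-01"),
])

# An aggregation: daily revenue per creator -- the kind of core metric
# that downstream analytics, reporting, and experimentation consume.
rows = cur.execute("""
    SELECT d.name, f.pledged_on, SUM(f.amount_cents) AS revenue_cents
    FROM fact_pledge f
    JOIN dim_creator d USING (creator_id)
    GROUP BY d.name, f.pledged_on
    ORDER BY d.name
""").fetchall()
print(rows)  # [('Ada', '2024-06-01', 800), ('Grace', '2024-06-01', 700)]
```

In production, the same fact/dimension split would live in a warehouse or data lake table format, with the aggregation materialized by a scheduled pipeline rather than an ad hoc query.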

Benefits

  • Salary
  • Equity plans
  • Healthcare
  • Flexible time off
  • Company holidays and recharge days
  • Commuter benefits
  • Lifestyle stipends
  • Learning and development stipends
  • Patronage
  • Parental leave
  • 401(k) plan with matching