Senior Data Engineer, Data Platform

Rocket Money
Washington, DC

About The Position

In this role, you will:

  • Be an end-to-end owner of our data platform infrastructure, ensuring its security, usability, and performance. Work closely with analytics engineers, machine learning engineers, and software engineers to ensure the platform meets their needs.
  • Make collaborative decisions about data tooling, pipeline design, and governance, and implement opinionated interfaces that facilitate easy, best-practice-aligned development for other teammates.
  • Continuously reduce failure rates for data sources by moving alerting “to the left,” i.e., catching and quarantining bugs and failures as close to the source as possible.
  • Analyze patterns in how source data is generated, modeled, and consumed, and work with stakeholders to implement the best solution when new data sources are added.
  • Take ownership of existing tools and workflows that may be functional but lack formal structure, documentation, or reliability measures. You'll assess what's working, identify gaps, and systematically improve systems to production-grade quality.
  • Document everything you build, knowing that you're creating the foundation others will build upon. Your runbooks, architecture decision records, and system documentation will become the institutional knowledge of our data platform.
  • Proactively communicate with stakeholders about platform capabilities, technical constraints, architectural decisions, project priorities, and platform support.
  • Confidently juggle multiple projects and priorities in our fast-paced environment, working with stakeholders and platform teammates to ensure infrastructure changes, migrations, and improvements are delivered on schedule.
  • Automate aggressively and deliberately, using anything from GitHub Actions to Slack Workflows to minimize repetitive tasks, while judging when the level of effort is appropriate and avoiding over-engineering.
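For illustration, the "shift-left" quarantine idea above can be sketched in a few lines of Python. This is a minimal, hypothetical sketch, not Rocket Money's actual code; the function names, field names, and validation rules are all assumptions for the example:

```python
# Hedged sketch: validate records at ingestion time and quarantine failures
# close to the source, so alerts can fire before bad data reaches downstream
# models. All names and rules here are illustrative.

def validate_record(record):
    """Return a list of rule violations for one incoming record."""
    errors = []
    if record.get("user_id") is None:
        errors.append("missing user_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)):
        errors.append("non-numeric amount")
    return errors

def partition_batch(batch):
    """Split a batch into (clean, quarantined) as early as possible."""
    clean, quarantined = [], []
    for record in batch:
        errors = validate_record(record)
        if errors:
            # Quarantined rows keep the failure reason so an alert can
            # reference it immediately, instead of a downstream job failing.
            quarantined.append({"record": record, "errors": errors})
        else:
            clean.append(record)
    return clean, quarantined
```

In practice the same pattern shows up as source-level tests or contracts in ingestion tooling; the point is that the split happens before loading, not after a downstream model breaks.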

Requirements

  • You have 6+ years of experience working with data infrastructure, data engineering, or platform engineering within a fast-paced environment. You are highly proficient with SQL, Python, and cloud-based Infrastructure-as-Code (e.g. Terraform), and comfortable working with bash/shell scripting.
  • You have 4+ years of production experience with modern data stacks including data warehouses (BigQuery, Snowflake, or Redshift), orchestration tools, managed ingestion services, and infrastructure as code (Terraform, Pulumi, or CloudFormation).
  • You have 2+ years of experience building and maintaining production data pipelines, whether through ELT tools, custom applications, streaming systems, or event-driven architectures.
  • You've successfully "professionalized" data infrastructure before—taking scrappy, working systems and evolving them into reliable, well-documented, production-grade platforms. You can articulate what "production-ready" means in a data context.
  • You have a bias toward action and aren't paralyzed by imperfect solutions. You understand when "good enough for now with a plan to improve" beats "perfect but six months late." You ship incrementally and iterate based on feedback.
  • You're comfortable being the first person to tackle a problem. You don't need extensive mentorship or detailed tickets—you can take a high-level business need and figure out the technical approach. That said, you know when to ask for help and can articulate what you need.
  • You take ownership seriously—not just of writing code, but of outcomes. When you build something, you implement monitoring, write runbooks, create alerts, and ensure it can be maintained by others. You think about the full lifecycle of systems, not just initial delivery.
  • You have strong opinions, weakly held. You can make and defend architectural decisions, but you're open to feedback and willing to change course when presented with better information. You can disagree and commit.
  • You understand that "building from scratch" doesn't mean rejecting existing tools—it means thoughtfully selecting, configuring, and integrating managed services and open-source solutions to create a cohesive platform. You know when to build and when to buy.
  • You have experience making big changes to critical data infrastructure. You've successfully re-architected, migrated, or upgraded data tooling with strict SLAs, without significantly affecting downstream stakeholders.
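As an illustration of the kind of low-risk cutover the last bullet implies, one common pattern is a blue/green swap: build and backfill the new table in the background, sanity-check it, then atomically repoint the stable name consumers read from. The sketch below simulates the idea in plain Python; the registry class and all names are hypothetical stand-ins for a warehouse's view layer, not any specific tool's API:

```python
# Hedged sketch of a blue/green cutover: downstream consumers read through a
# stable "view" name while the backing table is rebuilt, then the pointer is
# swapped in a single step so readers never see a half-migrated state.

class ViewRegistry:
    def __init__(self):
        self._views = {}   # view name -> backing table name
        self._tables = {}  # table name -> rows

    def create_table(self, name, rows):
        self._tables[name] = list(rows)

    def point_view(self, view, table):
        # Single-assignment swap: the cutover is atomic from a reader's view.
        self._views[view] = table

    def read(self, view):
        return self._tables[self._views[view]]

def blue_green_migrate(reg, view, new_table, new_rows):
    reg.create_table(new_table, new_rows)        # build and backfill offline
    assert len(new_rows) >= len(reg.read(view))  # sanity check before cutover
    reg.point_view(view, new_table)              # atomic repoint
```

Real warehouses offer equivalents (e.g., redefining a view or renaming tables in one statement); the toy registry just makes the ordering explicit: build, verify, then swap.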

Nice To Haves

  • You have led a data infrastructure migration or modernization project where you defined the vision, approach, and implementation.
  • You have created internal tools, frameworks, or CLIs that improved how teams work with data (not just one-off scripts).
  • You have established data platform best practices like CI/CD workflows, testing frameworks, or observability standards where none existed.
  • Expertise in cloud platforms and technologies analogous to our stack. Our stack: GCP (BigQuery, Datastream, Cloud Functions, Vertex AI, GCS), dbt, Fivetran, Postgres, Python, Terraform, Looker, and Retool. Analogous experience: AWS (Redshift, DMS, Lambda, SageMaker, S3) or Azure (Synapse, Data Factory, Functions), Snowflake, Airbyte/Stitch, infrastructure-as-code tools, and BI platforms.


Benefits

  • Health, Dental & Vision Plans
  • Life Insurance
  • Long/Short Term Disability
  • Competitive Pay
  • 401k Matching
  • Team Member Stock Purchasing Program (TMSPP)
  • Learning & Development Opportunities
  • Tuition Reimbursement
  • Unlimited PTO
  • Daily Lunch, Snacks & Coffee (in-office only)
  • Commuter benefits (in-office only)