Senior Data Engineer & Architect

Adobe · San Jose, CA

About The Position

The Adobe Experience Platform (AEP) Product Success Engineering team is hiring a Senior Data Engineer & Architect to help build and scale the infrastructure that powers product adoption insights, customer retention analytics, system performance monitoring, and operational intelligence across Adobe. In this role, you will design and build reliable, scalable, cloud-native data systems while contributing to platform-wide architectural direction and standards. You’ll collaborate closely with Data Scientists, Analytics Engineers, Solution Architects, and Product partners to translate evolving business needs into durable data solutions that support analytics and machine learning use cases. This opportunity is ideal for someone who enjoys hands-on engineering while influencing broader data architecture strategy.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field — or equivalent practical experience.
  • 8+ years of experience in data engineering or software engineering, including building and operating production data platforms.
  • Experience contributing to data architecture decisions at the system or platform level.
  • Strong experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift), including schema design and performance tuning.
  • Experience building production-grade pipelines using orchestration frameworks (Airflow, Dagster, Prefect) and distributed processing tools (Spark, Beam); see the orchestration sketch after this list.
  • Deep understanding of ETL/ELT patterns, dimensional modeling, and analytics dataset design.
  • Experience implementing data governance and quality practices, including lineage, validation, and role-based access controls.
  • Proficiency in Scala or Java, plus experience with Python and SQL.
  • Experience working with at least one major cloud platform (AWS, Azure, or GCP).
  • Experience defining data modeling standards, naming conventions, and data contracts.
  • Familiarity with event-driven and streaming systems (e.g., Kafka, Kinesis).
  • Ability to translate business questions into scalable data models and platform capabilities.
  • Ability to clearly explain data and architectural concepts to diverse audiences.
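
As a concrete illustration of the orchestration-framework requirement, below is a minimal sketch of a daily ELT pipeline in Airflow 2.x. The DAG id, task bodies, and table names are hypothetical placeholders, not an actual Adobe pipeline.

```python
# Minimal Airflow 2.x DAG sketching the ELT pattern: extract raw usage
# events, load them to a warehouse stage, then run a SQL transform.
# All identifiers below are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_usage_events(**context):
    # Placeholder: pull raw events from the source system for the run date.
    print(f"extracting events for {context['ds']}")


def load_to_stage(**context):
    # Placeholder: copy the extracted batch into a warehouse staging table.
    print("loading to stage.usage_events_raw")


def transform_to_model(**context):
    # Placeholder: run the SQL that builds the analytics-ready table.
    print("building analytics.fct_usage_daily")


with DAG(
    dag_id="usage_events_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # "schedule_interval" on Airflow < 2.4
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_usage_events)
    load = PythonOperator(task_id="load", python_callable=load_to_stage)
    transform = PythonOperator(task_id="transform", python_callable=transform_to_model)

    extract >> load >> transform
```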

Nice To Haves

  • Experience in B2B SaaS, product analytics, or customer success environments.
  • Hands-on experience with modern data stack tools (dbt, Great Expectations, Atlan, Collibra); see the validation sketch after this list.
  • Exposure to MLOps concepts such as feature pipelines and training datasets.
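
For the validation tooling mentioned above, here is a hedged sketch using Great Expectations' classic pandas-backed API (older, pre-1.0 releases; newer releases use a context-based API instead). The column names and checks are illustrative only.

```python
# Column-level validation with Great Expectations' classic pandas API.
import great_expectations as ge
import pandas as pd

df = ge.from_pandas(
    pd.DataFrame(
        {
            "account_id": ["a1", "a2", "a3"],
            "daily_active_users": [120, 45, 300],
        }
    )
)

# Basic integrity checks of the kind an automated validation step might run.
df.expect_column_values_to_be_not_null("account_id")
df.expect_column_values_to_be_between("daily_active_users", min_value=0)

# Validate the whole suite and inspect the aggregate result.
print(df.validate())
```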

Responsibilities

Design & Evolve the Data Platform

  • Partner with engineering leadership to develop and evolve enterprise-scale data architecture across warehouses, lakes, and modern data platforms.
  • Develop logical and physical data models that enable analytics and ML use cases, including dimensional models and optimized schemas (see the schema sketch after this list).
  • Contribute to platform standards, reference architectures, and documentation across ingestion, transformation, governance, and analytics enablement.
  • Collaborate with Enterprise Architecture and Cloud Infrastructure teams to align with security, compliance, performance, and cost optimization guidelines.
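
To illustrate the kind of dimensional model this group of responsibilities describes, here is a hedged star-schema sketch: one daily usage fact table keyed to a conformed account dimension. All table and column names are invented for illustration and do not reflect an actual Adobe schema.

```python
# Star-schema DDL kept as Python constants, to be run through whatever
# DB-API cursor the warehouse client provides. Names are hypothetical.
DIM_ACCOUNT_DDL = """
CREATE TABLE IF NOT EXISTS analytics.dim_account (
    account_key   INTEGER PRIMARY KEY,
    account_id    VARCHAR NOT NULL,   -- natural key from the source system
    segment       VARCHAR,
    region        VARCHAR
);
"""

FCT_USAGE_DDL = """
CREATE TABLE IF NOT EXISTS analytics.fct_usage_daily (
    date_key      INTEGER NOT NULL,   -- FK to dim_date
    account_key   INTEGER NOT NULL,   -- FK to dim_account
    feature_key   INTEGER NOT NULL,   -- FK to dim_feature
    active_users  INTEGER,
    events        BIGINT
);
"""


def create_tables(cursor):
    # Execute each DDL statement against the warehouse.
    for ddl in (DIM_ACCOUNT_DDL, FCT_USAGE_DDL):
        cursor.execute(ddl)
```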

Build Scalable Data Pipelines

  • Design, develop, and maintain reliable ETL/ELT pipelines integrating data from usage, clickstream, entitlement, cost, and support systems.
  • Build modular, reusable pipeline frameworks using orchestration tools such as Airflow, Dagster, or Prefect.
  • Optimize data processing workflows for performance, scalability, and cost efficiency using distributed systems (e.g., Spark, Beam); see the tuning sketch after this list.
  • Work with Data Scientists and Analytics Engineers to operationalize data products and ML-ready datasets.
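
One common example of the Spark tuning referenced above is replacing a shuffle join with a broadcast join when one side is a small dimension table. The sketch below assumes PySpark; the S3 paths and column names are hypothetical.

```python
# Broadcast-join optimization in PySpark: ship the small dimension table
# to every executor instead of shuffling the large fact table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("usage_join").getOrCreate()

events = spark.read.parquet("s3://bucket/usage_events/")   # large fact data
accounts = spark.read.parquet("s3://bucket/dim_account/")  # small dimension

# broadcast() hints Spark to turn the shuffle join into a map-side join.
joined = events.join(broadcast(accounts), on="account_id", how="left")

# Partitioning the output by date keeps downstream scans cheap.
joined.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://bucket/usage_enriched/"
)
```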

Strengthen Governance, Quality & Reliability

  • Design and implement scalable frameworks for managing and improving data integrity, including cataloging, lineage tracking, access controls, and automated validation processes to ensure trusted, production-grade data.
  • Establish and continuously improve reliability standards for data services, defining and tracking key SLOs such as freshness, pipeline success rates, and availability through proactive monitoring and alerting (see the freshness-check sketch after this list).
  • Maintain clear metadata, documentation, and data definitions to improve discoverability, enable self-service analytics, and support consistent operational excellence across the platform.
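
As a sketch of how a freshness SLO might be tracked, the snippet below compares a table's most recent load timestamp against a threshold. The SQL, table name, and alerting behavior are assumptions for illustration.

```python
# Freshness SLO check: raise if the table's latest load is older than the
# threshold. Assumes loaded_at is stored as a timezone-aware UTC timestamp.
from datetime import datetime, timedelta, timezone

FRESHNESS_SLO = timedelta(hours=6)


def check_freshness(cursor, table="analytics.fct_usage_daily"):
    cursor.execute(f"SELECT MAX(loaded_at) FROM {table}")
    last_loaded = cursor.fetchone()[0]
    lag = datetime.now(timezone.utc) - last_loaded
    if lag > FRESHNESS_SLO:
        # In production this would page on-call or post to an alert channel.
        raise RuntimeError(f"{table} is stale: {lag} since last load")
    return lag
```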