Senior Data Engineer

Topgolf, Dallas, TX

About The Position

The Senior Data Engineer is responsible for designing, building, and optimizing scalable data platforms and pipelines that support analytics, machine learning, and business intelligence initiatives. This role requires deep expertise in AWS Redshift, SageMaker AI, Amazon QuickSight (Quick Suite), Apache Airflow, and Python, with a strong emphasis on performance, reliability, and data governance. Responsibilities, detailed below, span five areas: data architecture and engineering; cloud and AWS platform development; machine learning enablement (SageMaker AI); business intelligence and reporting; and governance and best practices.

Requirements

  • 5+ years of experience in data engineering or related roles.
  • Deep experience with one or more cloud providers: AWS (Glue, S3, Redshift), Azure (Data Lake, Databricks), or GCP.
  • Advanced expertise in AWS Redshift or Snowflake (architecture, performance tuning, query optimization).
  • Hands-on experience with Amazon SageMaker AI for model development and deployment.
  • Advanced proficiency in Python & Airflow for data engineering.
  • Deep knowledge of SQL and data modeling techniques (dimensional modeling, star/snowflake schemas).
  • Experience with large-scale distributed data systems and cloud-native architectures.
  • Familiarity with DevOps practices, CI/CD, and infrastructure-as-code.
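The dimensional-modeling requirement above can be illustrated with a minimal sketch. In a star schema, measures live in a narrow fact table keyed by surrogate IDs into descriptive dimension tables; aggregating a measure by a dimension attribute is then a simple keyed join. All table and column names below are hypothetical, not taken from the posting.

```python
# Minimal star-schema sketch (illustrative names only):
# a fact table of visits keyed into a venue dimension by surrogate key.

# Dimension table: surrogate key -> descriptive attributes.
dim_venue = {
    1: {"venue_name": "Dallas", "state": "TX"},
    2: {"venue_name": "Austin", "state": "TX"},
}

# Fact table: one row per event, holding only keys and measures.
fact_visits = [
    {"venue_key": 1, "guests": 4, "spend": 180.0},
    {"venue_key": 1, "guests": 2, "spend": 95.0},
    {"venue_key": 2, "guests": 6, "spend": 240.0},
]

def spend_by_state(facts, dim):
    """Aggregate a measure by a dimension attribute (a star-schema join)."""
    totals = {}
    for row in facts:
        state = dim[row["venue_key"]]["state"]
        totals[state] = totals.get(state, 0.0) + row["spend"]
    return totals

print(spend_by_state(fact_visits, dim_venue))  # {'TX': 515.0}
```

In a warehouse such as Redshift the same shape appears as DDL (a fact table with foreign keys into dimension tables); the dict-based version above only sketches the join logic.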

Responsibilities

  • Design, develop, and maintain scalable data pipelines using Python and AWS-native services.
  • Architect and optimize data warehouse solutions in AWS Redshift, including schema design, workload management, and performance tuning.
  • Implement ELT/ETL processes to ingest, transform, and curate structured and semi-structured data.
  • Ensure data integrity, quality, and security across all data platforms.
  • Develop and maintain cloud-native data solutions leveraging AWS services (e.g., Redshift, S3, Glue, Lambda, SageMaker, Kinesis, RDS, IAM).
  • Optimize Redshift clusters for cost efficiency and high performance.
  • Implement CI/CD pipelines and infrastructure-as-code practices for data systems.
  • Partner with data scientists to operationalize machine learning models using SageMaker AI.
  • Build data pipelines that support model training, validation, and deployment.
  • Implement feature engineering and feature store best practices.
  • Monitor and optimize model performance in production environments.
  • Enable and support analytics and reporting solutions using Amazon QuickSight (Quick Suite).
  • Develop curated datasets and semantic layers to support self-service analytics.
  • Ensure data models are structured to support performance and scalability in BI tools.
  • Establish and enforce data engineering standards, coding best practices, and documentation.
  • Implement monitoring, logging, and alerting for data pipelines and warehouse performance.
  • Ensure compliance with security, privacy, and regulatory requirements.
  • Mentor junior engineers and provide technical leadership across data initiatives.
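As a purely illustrative sketch of the ELT curation work described above, the snippet below flattens semi-structured, newline-delimited JSON events (as they might land in S3) into uniform rows suitable for a warehouse load. The field names and the null-handling policy are assumptions, not part of the role description.

```python
import json

# Hypothetical raw events: semi-structured, with nested payloads
# and optional fields.
RAW = """
{"event_id": "e1", "ts": "2024-06-01T10:00:00Z", "payload": {"bay": 12, "score": 350}}
{"event_id": "e2", "ts": "2024-06-01T10:05:00Z", "payload": {"bay": 7}}
"""

def curate(lines):
    """Flatten newline-delimited JSON into fixed-width dict rows.

    Missing fields become None so every row has the same columns,
    which is what a COPY into a warehouse table expects.
    """
    rows = []
    for line in lines.strip().splitlines():
        rec = json.loads(line)
        payload = rec.get("payload", {})
        rows.append({
            "event_id": rec.get("event_id"),
            "ts": rec.get("ts"),
            "bay": payload.get("bay"),
            "score": payload.get("score"),
        })
    return rows

rows = curate(RAW)
print(rows[1])  # second event has no score -> score is None
```

In practice a step like this would run as one Airflow task, with the curated output staged back to S3 and loaded into Redshift via COPY; the pure-Python version here just shows the transform in isolation.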

Benefits

  • Free Play & half-price food!
  • Health, dental, vision, 401(k) playmaker match, free mental well-being platform – and that’s just for starters for those who qualify.