Data Engineer II, UTR Planning Tech

Amazon, Nashville, TN

About The Position

UTR Planning Tech builds the data infrastructure that powers labor planning across 12 Amazon last-mile and sort-center business lines. Our pipelines feed the planning systems that determine how Amazon staffs its delivery network, serving hundreds of sites and processing millions of data points daily.

Our team is shifting from hand-coded, custom pipelines to an AI-native approach. We are building reusable frameworks where engineers and business users define what they need through configurations and natural language instead of writing custom code for every use case. AI agents handle orchestration, validation, and deployment.

We are early in this transformation, which means you will not inherit a finished system. You will help define how we build, what patterns we standardize, and how AI fits into data engineering workflows. If you want to learn fast and have a hand in shaping the way a team works, this is that opportunity.

We are hiring a Data Engineer to own significant portions of our data architecture, drive the design of AI-powered frameworks, and deliver data solutions that directly impact planning accuracy across Amazon's delivery network.
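
To make the configuration-driven idea concrete, here is a minimal sketch, assuming a hypothetical config schema: a declarative pipeline definition standing in for hand-coded ETL. The field names, dataset, and paths below are invented for illustration and are not the team's actual framework.

    # Minimal sketch of a declarative pipeline definition (hypothetical schema).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PipelineConfig:
        dataset: str
        source_prefix: str             # e.g. an S3 prefix (assumed)
        target_table: str              # e.g. a Redshift table (assumed)
        schedule: str = "@daily"

    def plan_pipeline(cfg: PipelineConfig) -> list[str]:
        # A real framework would emit orchestration tasks from the config;
        # this stub just describes the steps the config implies.
        return [
            f"ingest {cfg.source_prefix}",
            f"transform and stage {cfg.dataset}",
            f"load {cfg.target_table} ({cfg.schedule})",
        ]

    print(plan_pipeline(PipelineConfig(
        dataset="site_labor_forecast",                  # hypothetical dataset
        source_prefix="s3://example-bucket/raw/",       # hypothetical source
        target_table="planning.site_labor_forecast",    # hypothetical target
    )))

The point of the pattern is that onboarding a new dataset means adding a config, not writing new pipeline code.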

Requirements

  • 3+ years of data engineering experience
  • 1+ years of experience developing and operating large-scale data structures for business intelligence analytics, using ETL/ELT processes, OLAP technologies, data modeling, SQL, and Oracle
  • Experience with data modeling, warehousing, and building ETL pipelines

Nice To Haves

  • Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
  • Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Responsibilities

  • Design and own logical and physical data models for major datasets in the team's architecture. Create coherent models that drive physical design and serve multiple downstream consumers.
  • Build and optimize ETL pipelines for complex datasets using AWS services (Redshift, S3, EMR, Glue, Lambda, Athena) and Python-based orchestration (Airflow/MWAA); a minimal DAG sketch follows this list. Your solutions will be testable, maintainable, and efficient.
  • Design and build configuration-driven data frameworks that replace repetitive custom code with reusable, declarative patterns for ingestion, transformation, and metric curation. Own the design of these frameworks, not just the implementation.
  • Build AI agent tooling and MCP-based interfaces that allow conversational agents to generate SQL, validate configurations, manage data quality rules, and execute pipeline operations through natural language (see the MCP tool sketch after this list).
  • Own ongoing data quality for datasets you build. Implement standardized data contracts, define SLAs, establish data certification processes, and automate manual quality processes.
  • Work with planning scientists, software engineers, BI engineers, and product managers to balance customer requirements with technical requirements. Help shape what we build, not just how.
  • Improve self-service access to data. Build analytical data models and tooling that reduce dependency on the DE team for common data access patterns.
  • Improve engineering processes: automate manual operations, establish monitoring and alerting standards, and drive code quality and dependency management practices.
  • Mentor engineers and interns. Train new team members on how team data solutions are constructed, how they operate, and how they fit into the broader architecture.
  • Participate in the interview process and help recruit for the team.
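
To ground the pipeline bullet above, here is a minimal, runnable sketch of a config-driven Airflow 2.x DAG of the kind that runs on MWAA. The bucket, table, and task bodies are hypothetical placeholders, not the team's actual pipeline.

    # Config-driven Airflow DAG sketch (hypothetical names throughout).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    CONFIG = {
        "dataset": "site_labor_forecast",                # hypothetical
        "source_prefix": "s3://example-bucket/raw/",     # hypothetical
        "target_table": "planning.site_labor_forecast",  # hypothetical
    }

    def extract(**_):
        # Placeholder: stage new objects found under the source prefix.
        print(f"extracting from {CONFIG['source_prefix']}")

    def load(**_):
        # Placeholder: COPY staged files into the Redshift target table.
        print(f"loading into {CONFIG['target_table']}")

    with DAG(
        dag_id=f"etl_{CONFIG['dataset']}",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ):
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task

Because the DAG reads everything dataset-specific from CONFIG, the same code can be stamped out per dataset by the framework rather than rewritten by hand.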

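For the agent-tooling bullet, here is a sketch of what an MCP tool surface could look like, using the FastMCP helper from the open-source MCP Python SDK. The tool names, config schema, and validation rules are invented for illustration.

    # Sketch of MCP tools an agent could call (hypothetical tools and schema).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("pipeline-ops")

    REQUIRED_KEYS = {"dataset", "source_prefix", "target_table"}  # assumed schema

    @mcp.tool()
    def validate_pipeline_config(config: dict) -> str:
        """Check a declarative pipeline config for required keys."""
        missing = REQUIRED_KEYS - config.keys()
        return "ok" if not missing else f"missing keys: {sorted(missing)}"

    @mcp.tool()
    def row_count_sql(table: str) -> str:
        """Generate a simple data-quality probe for a table."""
        return f"SELECT COUNT(*) FROM {table};"

    if __name__ == "__main__":
        mcp.run()  # serves the tools to a conversational agent over stdio

A conversational agent connected to a server like this could validate configs or draft quality checks through natural language instead of waiting on hand-written code.
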
Benefits

  • health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance with an option for supplemental life plans, EAP, mental health support, a medical advice line, Flexible Spending Accounts, and adoption and surrogacy reimbursement coverage)
  • 401(k) matching
  • paid time off
  • parental leave