Data Architect - R01561308

Brillio
Chicago, IL

About The Position

We are seeking an experienced Data Architect to design and build scalable, cloud-native data platforms supporting IoT, operational systems, and enterprise analytics. This role requires a hands-on technical leader who can architect modern data pipelines, enable real-time and batch ingestion, and design performant serving layers for APIs and applications. The ideal candidate blends deep AWS expertise, strong data modeling skills, and the ability to translate business requirements into scalable technical solutions.

Requirements

  • 12+ years of experience in data engineering and AWS data architecture.
  • Strong expertise with AWS Glue, Redshift, S3, Lake Formation, Aurora (Postgres/MySQL).
  • Proven experience building APIs or exposing curated datasets for web applications (preferably React or similar front-end frameworks).
  • Strong programming skills in Python and SQL.
  • Experience with data modeling, ETL/ELT frameworks, and performance optimization.
  • Working knowledge of Terraform/CloudFormation and CI/CD practices.
  • Understanding of data security, IAM policies, encryption, and governance frameworks.

Nice To Haves

  • Experience with Athena for ad-hoc analytics.
  • Familiarity with API Gateway, AppSync, or GraphQL APIs for serving data to front-end applications.
  • Knowledge of event-driven pipelines (Kinesis, EventBridge, Kafka) and data quality monitoring.

Responsibilities

  • Architect ingestion pipelines for streaming and event-based data (e.g., IoT telemetry).
  • Implement time-window logic and event-to-state transformations.
  • Design, build, and optimize data pipelines using AWS Glue, Lambda, and Step Functions for ingestion, transformation, and curation.
  • Develop and maintain data lakes (S3 + Lake Formation) and data warehouses (Redshift) to support analytics and visualization.
  • Integrate Aurora (PostgreSQL/MySQL) as a serving layer for APIs and dashboards.
  • Design APIs or direct query interfaces that expose curated datasets for React-based dashboards and web applications.
  • Define and implement data models, partitioning strategies, and schema evolution best practices for performance and scalability.
  • Implement data security policies, encryption, and role-based access controls using AWS IAM, KMS, Lake Formation, and Secrets Manager.
  • Ensure compliance with organizational data governance and privacy standards.
  • Maintain data cataloging, lineage, and access tracking within AWS Glue Data Catalog and Lake Formation.
  • Implement CI/CD pipelines for data workflows in collaboration with DevOps teams.
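To illustrate the "time-window logic and event-to-state transformations" responsibility above, here is a minimal, dependency-free Python sketch. The device names, field layout, and window size are illustrative assumptions, not details from the posting; in practice this logic would typically run inside Glue, Lambda, or a Kinesis consumer.

```python
from collections import defaultdict

def tumbling_windows(events, window_seconds):
    """Group (timestamp, device_id, value) telemetry events into
    fixed-size (tumbling) time windows keyed by window start."""
    windows = defaultdict(list)
    for ts, device_id, value in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        windows[(window_start, device_id)].append(value)
    return dict(windows)

def latest_state(events):
    """Event-to-state: reduce a telemetry stream to each device's
    most recent reading (last-write-wins by timestamp)."""
    state = {}
    for ts, device_id, value in sorted(events):
        state[device_id] = value
    return state

# Hypothetical telemetry: (epoch_seconds, device_id, reading)
telemetry = [
    (100, "pump-1", 7.2),
    (130, "pump-1", 7.9),
    (190, "pump-2", 3.1),
]

print(tumbling_windows(telemetry, 60))
print(latest_state(telemetry))
```

The same two patterns (windowed aggregation for analytics, last-write-wins reduction for a serving layer such as Aurora) scale up directly to streaming frameworks; only the execution engine changes.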