Data Engineer

Fairway Independent Mortgage Corporation
Remote

About The Position

We are seeking a Data Engineer to architect and scale our next-generation big data ecosystem. In this role, you will leverage Databricks to design robust ETL pipelines, implement sophisticated data models, and integrate diverse streaming and API sources. As a technical leader, you will bridge the gap between complex engineering requirements and actionable business insights, ensuring our data infrastructure is both high-performing and compliant with global standards.

Requirements

  • 6+ years of proven experience architecting and developing enterprise-scale data solutions.
  • Extensive hands-on experience with the Azure data stack, including Synapse, Data Factory, Databricks, Cosmos DB, and Azure SQL.
  • Expert-level command of SQL and Python (required); additional proficiency in JavaScript, R, VBA, or web technologies (HTML/CSS/JSON) is highly valued.
  • Demonstrated success in building and managing complex ETL/ELT pipelines within both Databricks and modern Business Intelligence environments (e.g., Power BI, Sisense, Tableau, or Domo).
  • Deep experience leveraging API calls to ingest, synchronize, and automate data flow between disparate third-party and internal systems.
  • Ability to bridge the gap between backend data engineering and front-end visualization, specifically optimizing Power BI for high-performance reporting.
  • A track record of designing, testing, and scaling automated data systems that drive operational efficiency, measurable cost savings, and improved business outcomes.
  • Experience working directly with business partners to translate high-level goals into technical workflows that enhance decision-making.

Responsibilities

  • Design and Orchestrate: Build and maintain high-performance, scalable ETL/ELT pipelines using Databricks to ensure seamless data flow.
  • Systems Integration: Architect robust integrations across a diverse ecosystem, including internal/external APIs, relational databases, and real-time streaming sources.
  • Data Refinement: Implement sophisticated transformation logic, data enrichment, and cleansing protocols to deliver "analytics-ready" datasets for downstream visualization.
  • Optimization & Modeling: Develop high-efficiency data models and schemas, applying advanced indexing and partitioning techniques to maximize query performance and scalability.
  • Ecosystem Management: Oversee and optimize cloud-based infrastructure, leveraging containerization and distributed computing to process massive datasets.
  • Advanced Analytics Support: Utilize big data frameworks to empower Machine Learning (ML) initiatives and collaborate with Data Science teams to deploy predictive models and statistical algorithms.
  • Automated Deployment: Design and implement "Zero-Touch" automation for CI/CD, encompassing build, test, and release processes to ensure rapid, reliable deployment.
  • Integrity & Compliance: Execute rigorous data governance, lineage tracking, and access controls to ensure total compliance with global privacy regulations and internal quality standards.
  • Documentation: Maintain comprehensive technical documentation for system configurations, data mapping, and architectural processes.