Director, Data Engineering

Tokio Marine HCC · Houston, TX (Hybrid)

About The Position

Tokio Marine HCC is a global, industry-leading specialty insurance group, backed by the strength and stability of the Tokio Marine Group. Offering over 100 classes of specialty insurance, we empower clients to pursue opportunities confidently through our “Mind Over Risk” philosophy. More than an insurance company, we are an organization built on innovation, unity, and trust. At our core, we are Always Advancing, driven by innovation and an entrepreneurial spirit that keeps us moving forward. Our people are Experts in Tomorrow, using curiosity and smart working to anticipate what’s next. With a culture rooted in Reaching Out, we foster genuine collaboration and support, ensuring every individual has the opportunity to succeed and make a difference.

We are seeking a highly experienced Director, Data Engineering, to lead enterprise data platform strategy and execution within the Corporate Data Office. This role provides strategic and hands-on technical leadership for the design, implementation, and optimization of modern, cloud-native data platforms. The Director is responsible for partnering on enterprise data architecture, leading large-scale cloud data initiatives, and ensuring secure, scalable, and cost-effective data movement across the organization. This includes ownership of AWS-based data infrastructure, Snowflake data warehouse/lakehouse environments, and Qlik Replicate (or similar change data capture technologies) supporting near-real-time and batch ingestion.

This role operates at both strategic and technical levels, setting direction for enterprise data engineering standards while remaining deeply engaged in architectural design, performance optimization, governance implementation, and complex troubleshooting. The Director partners closely with business leaders, enterprise architecture, cybersecurity, analytics, actuarial, finance, underwriting, and risk functions to ensure data capabilities directly enable corporate objectives.

Requirements

  • Minimum Bachelor’s degree in computer science, information systems, data engineering, or related field (or equivalent experience).
  • 12+ years of progressive experience in data engineering, data architecture, or enterprise data platforms.
  • 5+ years of leadership experience managing managers and technical teams in cross-functional enterprise environments.
  • Deep hands-on experience with AWS cloud architecture and services supporting large-scale data ecosystems.
  • Advanced hands-on experience with Snowflake, including architecture design, modeling, performance optimization, and security configuration.
  • Proven experience implementing and managing Qlik Replicate, Qlik Compose, or similar CDC/data integration platforms.
  • Demonstrated success delivering enterprise-scale data modernization initiatives.
  • Extensive experience with SQL and relational databases.
  • Experience building and optimizing end-to-end data pipelines supporting data lake, data warehouse, operational analytics, and AI/ML workloads.
  • Experience implementing CI/CD frameworks using GitHub, GitLab, Azure DevOps, or similar platforms.

Nice To Haves

  • 6+ years of hands-on dimensional and analytic modeling experience with conceptual, logical, and physical data models.
  • Knowledge of metadata management, data modeling, and related tools (Qlik Compose, Erwin, ER Studio, or others), including source-to-target mapping in direct collaboration with developers, data architects, and engineers.
  • Ability to design data models that ensure future business flexibility, ease of use, and sustainable architecture.
  • Ability to account for ELT/ETL and reporting challenges when designing data models.
  • Hands-on experience gathering data requirements and technical requirements.
  • Experience creating cross-functional documentation for multiple stakeholders and teams.
  • Experience in the Risk & Compliance domain.
  • Google Cloud, Amazon Web Services, or Azure.
  • CI/CD in GitHub, GitLab, or Azure DevOps.
  • Terraform or other Infrastructure-as-Code technology.
  • Kubernetes.
  • Qlik Replicate, Qlik Compose.
  • Power BI.

Responsibilities

  • You will help others build code to extract raw data, coach the team on techniques to validate its quality, and apply your deep data knowledge to ensure the correct data is ingested across the pipeline.
  • You’ll also provide expertise on techniques to transform raw data into forms compatible with downstream data sources.
  • You will guide the development of data tools used to transform, manage, and access data.
  • You’ll also advise the team on writing and validating code to test the storage and availability of data platforms so that they’re more resilient.
  • You will oversee the implementation of performance monitoring protocols across data pipelines, coaching the team on building visualizations and aggregations to monitor pipeline health.
  • You’ll also coach others on implementing solutions and self-healing processes that minimize points of failure across multiple product features.
  • You will help others identify data governance needs, overseeing the design of data modeling and handling procedures to ensure compliance with all applicable laws and policies.
  • You’ll oversee data accessibility within your assigned pipelines.
  • You will prepare team members for meetings with appropriate stakeholders across teams and address concerns around data requirements by providing guidance on feature estimation.
  • You will help others assess data costs, access, usage, use cases, dependencies across products, and availability for business or customer scenarios related to one or more product features.