About The Position

FlexStaff is seeking a Senior Data Platform Engineer - Azure Databricks & API Integration (PHI). This is a temporary, part-time role (approximately 20 hours/week) for a one-year assignment. Only candidates who reside in New York State can be considered. The Senior Data Platform Engineer will lead the operation, stability, and evolution of our client's Azure Databricks environment and the API integrations that support enterprise data exchange involving Protected Health Information (PHI). This role is a key senior individual contributor within the Data Engineering organization and works closely with architecture, security, and application teams to ensure a secure, scalable, and compliant data platform. The ideal candidate brings deep hands-on experience with Azure Databricks, Azure-native services, and API development, along with a strong understanding of healthcare data security and compliance.

Requirements

  • Strong hands-on experience with Azure Databricks (administration and development)
  • Advanced proficiency in Python (PySpark) and/or Scala
  • Experience developing and supporting REST APIs (e.g., Python/FastAPI, .NET, or Java/Spring Boot)
  • Deep experience with Azure data and integration services, including Azure Data Lake Storage (ADLS Gen2), Azure Data Factory or Synapse pipelines, and Azure Event Hubs or Service Bus
  • Experience with Azure DevOps, Git, and CI/CD automation
  • Infrastructure as Code experience using Terraform or ARM/Bicep
  • Strong understanding of cloud security, identity, and access management
  • Proven experience working with PHI and regulated healthcare data
  • Strong knowledge of HIPAA and data privacy best practices
  • Experience implementing audit logging, access controls, and governance in Azure
  • 7+ years of experience in data engineering, platform engineering, or backend engineering
  • 3–5+ years of hands-on Azure Databricks experience in production environments
  • 3+ years of experience designing and supporting APIs at enterprise scale
  • Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience

Nice To Haves

  • Experience with Delta Live Tables, Unity Catalog, or Databricks Workflows
  • Familiarity with FHIR, HL7, or healthcare interoperability standards
  • Experience with streaming architectures and near–real-time data processing
  • Azure certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect)
  • Experience in healthcare, life sciences, or other highly regulated industries

Responsibilities

  • Own and maintain the Azure Databricks workspace(s), including clusters, jobs, pools, and libraries
  • Design, build, and optimize Spark-based data pipelines using Databricks, Delta Lake, and PySpark/Scala
  • Implement best practices for cluster sizing, autoscaling, job orchestration, and cost optimization
  • Manage Databricks access controls using Azure AD, RBAC, and workspace permissions
  • Support CI/CD pipelines for Databricks assets using Azure DevOps and Git integration
  • Design, develop, and support secure RESTful APIs that enable data ingestion and consumption
  • Integrate Azure Databricks with upstream and downstream systems using APIs, messaging, or batch interfaces
  • Ensure API reliability through proper versioning, monitoring, logging, and alerting
  • Collaborate with application and integration teams to meet data access and performance requirements
  • Ensure all Databricks workloads and APIs meet HIPAA, Azure security, and enterprise compliance standards
  • Implement encryption at rest and in transit, key management (Azure Key Vault), and secrets handling
  • Enforce data protection controls including data masking, row/column-level security, and auditing
  • Partner with security, privacy, and compliance teams on audits, risk assessments, and remediation
  • Leverage Azure services such as ADLS Gen2, Azure Data Factory, Event Hubs, Azure Monitor, and Log Analytics
  • Monitor platform health, performance, and availability; troubleshoot and resolve production issues
  • Create and maintain technical documentation, standards, and operational runbooks
  • Provide technical guidance and mentorship within the Data Engineering team