Senior Enterprise Info Mgmt Platform Admin

Stanford Health Care | Palo Alto, CA
$67 - $88/hour | Hybrid

About The Position

This role focuses on day-to-day administration, monitoring, and support of the Enterprise Information Management (EIM) platform (currently Databricks on Azure, with mesh connections to GCP and AWS). It includes participating in management of the cloud infrastructure (DevOps, SecOps, FinOps, etc.), evaluating and deploying new technologies used by the EIM platform, and designing, building, and operating process automations to manage the platform.

The role designs and performs standard Data Ops support processes (user support, production operations, security management, etc.) and/or leads the design and development of new solutions (pipelines, process automations, user interfaces, infrastructure-as-code, etc.) using various cloud and AI technologies, including AI/LLM models, to deliver operational automations or data products across one or more domains.

The ideal candidate has a strong background in cloud technologies, particularly Azure Databricks, applies systems thinking to create holistic solutions, and has a passion for driving automation and efficiency in data management processes. You will work closely with various teams to facilitate the innovation process while ensuring data integrity, security, and compliance. This individual will lead projects, mentor team members, and assess and recommend strategies that advance the goals of the organization. The role resides in Stanford Health Care's Enterprise Information Management department.

Requirements

  • BS/BA degree in information technology, information systems, business management, business analytics, business administration or a directly related field from an accredited college or university required.
  • Four (4) or more years of experience as a Cloud Engineer, Data Engineer, or in a similar role with a focus on cloud technologies required.
  • Certifications in Azure (e.g., Azure Solutions Architect, Azure Data Engineer) and/or Databricks required.
  • Strong knowledge of Azure services and related platforms, including Databricks, Microsoft Fabric, Azure Data Factory, Azure SQL Database, and Azure Storage.
  • Experience with a variety of cloud database services.
  • Proficiency in infrastructure-as-code tools (e.g., Terraform, ARM templates, Bicep).
  • Recent experience with architecture, design and implementation of complex, highly available and highly scalable solutions.
  • Experience with data operations, ETL processes, and data pipeline management.
  • Familiarity with data security best practices and compliance standards.
  • Proficiency with CI/CD for production deployments.
  • Current knowledge across the breadth of Databricks product and platform features.
  • Excellent problem-solving skills and the ability to work independently and collaboratively.
  • Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
  • Experience with DevOps practices and tools (e.g., Azure DevOps, Git).
  • Knowledge of programming languages such as Python, PySpark, SQL, R, Scala.
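As an illustration of the cluster-policy and tagging standards referenced above, here is a minimal Python sketch of a helper that emits a Databricks cluster-policy definition. The attribute paths follow the documented Databricks cluster-policy schema; the function name, parameters, and default values are hypothetical, not an SHC standard:

```python
import json

def build_cluster_policy(max_workers: int, autotermination_minutes: int,
                         cost_center: str) -> str:
    """Return a Databricks cluster-policy definition as a JSON string.

    The policy caps autoscaling, pins (and hides) an auto-termination
    timeout to curb idle spend, and forces a chargeback tag so usage
    can be attributed to a cost center.
    """
    policy = {
        # Cap cluster size rather than fixing it, so teams keep flexibility.
        "autoscale.max_workers": {"type": "range", "maxValue": max_workers},
        # Fixed and hidden: users cannot disable auto-termination.
        "autotermination_minutes": {
            "type": "fixed",
            "value": autotermination_minutes,
            "hidden": True,
        },
        # Required tag for FinOps chargeback reporting.
        "custom_tags.cost_center": {"type": "fixed", "value": cost_center},
    }
    return json.dumps(policy, indent=2)
```

A definition like this would typically be applied per environment (dev/test/prod) through the Policies API or infrastructure-as-code rather than hand-edited in the workspace UI.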

Responsibilities

  • EIM Environment & Infrastructure Mgmt: Administer Cloud Platform workspaces: users/groups, workspace objects, permissions, cluster policies, pools, jobs, repos, and access to compute.
  • Lead development of automations and IaC for repeatable workspace provisioning and configuration.
  • Design and enforce platform standards: environment separation (dev/test/prod), workspace segmentation, cluster policy design, catalog structure, secure library/dependency management, etc.
  • Data Operations Management: Oversee and refine data platform and operational processes, ensuring high availability and performance of data pipelines.
  • Respond to incidents using defined support processes; assess post-incident learnings and drive preventive actions.
  • Analyze platform performance metrics against SLAs, vendor contracts, etc., identifying and driving improvements.
  • Data Pipelines/Ingestion: Design and implement scalable, efficient data pipelines for structured, semi-structured, and unstructured data from a variety of sources, including databases, APIs, and third-party services.
  • Security and Compliance: Partner with Security and Compliance to define requirements and solutions as technology evolves.
  • Operate and support Catalog and access controls over catalogs/schemas/tables, external locations, storage credentials, etc. in accordance with SHC Policies.
  • Configure and maintain secure connectivity to platform services: ADLS Gen2, Key Vault, Azure Monitor/Log Analytics, private networking, and approved ingestion endpoints.
  • Assess controls to meet healthcare data security requirements for PHI/PII: encryption in transit/at rest, secure secret management, key rotation coordination, audit logging, etc., and conduct recurring access reviews.
  • FinOps: Drive cost governance: usage reporting, cluster sizing guidance, job scheduling, idle/overprovisioned-cluster reduction, and tagging/chargeback support.
  • Documentation and Training: Create comprehensive documentation of solutions and operations processes. Conduct training sessions for internal teams on operational protocols and best practices.
  • Collaboration: Work closely with cross-functional teams and lead discussions on platform capabilities and operational processes/standards.
  • Innovation: Evaluate and recommend new tools and technologies to enhance the cloud data platform's capabilities. Apply Systems Thinking to designs and solutions.
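The tagging/chargeback support described under FinOps can be sketched in Python as a simple aggregation over usage records. This is a hypothetical illustration: field names such as `dbu_cost` and the `cost_center` tag key are assumptions, not an actual SHC billing schema:

```python
from collections import defaultdict

def chargeback_by_tag(usage_records, tag_key="cost_center",
                      untagged_bucket="UNTAGGED"):
    """Aggregate usage cost per chargeback tag value.

    Records missing the tag are grouped into a visible bucket so that
    gaps in tagging surface during FinOps review instead of vanishing.
    """
    totals = defaultdict(float)
    for rec in usage_records:
        bucket = rec.get("tags", {}).get(tag_key, untagged_bucket)
        totals[bucket] += rec["dbu_cost"]
    return dict(totals)

# Example usage with illustrative records:
records = [
    {"dbu_cost": 10.0, "tags": {"cost_center": "EIM"}},
    {"dbu_cost": 5.0,  "tags": {"cost_center": "EIM"}},
    {"dbu_cost": 2.5,  "tags": {}},  # untagged usage stays visible
]
report = chargeback_by_tag(records)
```

In practice the input would come from the platform's billing/usage export, and the `UNTAGGED` bucket would feed the recurring access and tagging reviews noted above.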