Data Platform Engineer, Senior Staff

Qualcomm · San Diego, CA
Onsite

About The Position

We are hiring a Data Platform Engineer to design, build, and operate a modern data platform with Databricks Lakehouse as a core foundation. This role is ideal for a senior engineer who excels in system design, platform innovation, and hands‑on execution. You will help design, implement, and improve the reliability of the data platform, influence architectural direction, and mentor engineers while working closely with data scientists, analysts, and application teams. This is a hands‑on, high‑impact role with strong ownership across architecture, implementation, reliability, security, and cost optimization. This role requires full-time onsite work (5 days per week) in either San Diego, CA or Boulder, CO.

Requirements

  • 7+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems or a related field. OR 9+ years of IT-related work experience without a Bachelor’s degree.
  • 5+ years of work experience with programming (e.g., Java, Python).
  • 3+ years of work experience with SQL or NoSQL Databases.
  • 3+ years of work experience with Data Structures and algorithms.
  • Proven experience designing complex, distributed data systems end‑to‑end
  • Strong system design skills across cloud infrastructure, data pipelines, storage, governance, and observability
  • Ability to evaluate and communicate architectural trade‑offs (scalability, reliability, cost, security, operability)
  • Demonstrated innovation mindset, such as: modernizing or simplifying legacy data platforms; introducing new architectural patterns or platform capabilities; improving reliability, performance, or developer experience through design
  • 8+ years building and operating large‑scale cloud platforms
  • 5+ years in Data Platform, DevOps, SRE, or DataOps roles
  • Expert‑level experience with AWS, including networking, IAM, security, and multi-account environments
  • Strong hands‑on experience with the Databricks Lakehouse Platform, including Unity Catalog, Delta Lake, and MLflow
  • Experience owning Databricks workspace architecture and governance
  • Deep experience with Terraform/Terramate (or similar IaC tools)
  • Proven experience building CI/CD pipelines (GitHub Actions, Jenkins)
  • Advanced Python and Bash scripting skills
  • Hands‑on experience managing Amazon EKS clusters and Helm deployments
  • Experience defining and operating SLIs/SLOs for platform reliability and data quality
  • Strong understanding of cloud security best practices
  • Experience supporting SOX, GDPR, or regulated environments
  • AWS Certified DevOps Engineer – Professional (or equivalent experience)
  • Ability to work independently and mentor team members
  • Strong communication skills to explain complex system designs to technical and non‑technical stakeholders

Nice To Haves

  • Experience with Acceldata, OpenTelemetry, or data observability tools
  • Experience with Fivetran Hybrid / HVR or enterprise ingestion tools
  • Experience working with HashiCorp Vault or similar secret management tools
  • Policy‑as‑code experience using Open Policy Agent (OPA)
  • Experience working with static code analysis tools such as Sonar
  • Experience with data lineage or metadata management tools
  • Databricks serverless compute optimization and cost management
  • Experience driving platform strategy across multiple teams distributed globally

Responsibilities

  • Design and build scalable, distributed data platform systems
  • Own Databricks workspace architecture, governance, and lifecycle management
  • Implement infrastructure as code using Terraform/Terramate
  • Build and support end‑to‑end data pipelines, from ingestion to analytics and ML
  • Develop and maintain CI/CD pipelines for infrastructure, data pipelines, notebooks, SQL, and ML artifacts
  • Operate and optimize Amazon EKS clusters supporting data workloads
  • Define and monitor SLIs/SLOs for platform reliability and data quality
  • Drive innovation and continuous improvement across platform architecture and tooling
  • Lead incident response, on‑call rotations, and blameless post‑mortems
  • Implement strong security, compliance, and governance controls
  • Mentor engineers and influence platform standards and best practices