Platform and Databricks DevOps Engineer

The Opportunity:
As a Databricks Platform DevOps Engineer at Booz Allen, you'll lead the design, automation, and reliable operation of Databricks Lakehouse environments that power mission-critical analytics, AI, and data initiatives for defense, intelligence, and civil clients. You'll drive CI/CD maturity, infrastructure as code, platform governance, and operational excellence to deliver secure, scalable, and cost-optimized Databricks platforms at enterprise scale.

Join a high-impact team that accelerates national security outcomes through advanced DevOps practices on Databricks. You'll enable clients to achieve data and decision superiority by automating complex platform deployments, enforcing compliance, and continuously improving performance and reliability.

What You'll Do:
- Manage account and workspace administration, including creating and managing workspaces, configuring settings, enabling features such as serverless compute or Previews, and handling account-level configurations via the account console.
- Oversee identity and access management: provision users, groups, and service principals via SCIM; assign roles and entitlements; manage workspace assignments; and enforce least-privilege access.
- Implement and govern Unity Catalog metastores, catalogs, schemas, tags, lineage, sharing, and fine-grained access controls to ensure data security, compliance, and discoverability across workspaces.
- Administer compute resources, creating and managing cluster policies, instance pools, job clusters, serverless options, Photon engine usage, and resource limits to standardize environments and control costs.
- Own infrastructure-as-code deployments (e.g., Terraform with the Databricks Terraform Provider) across multiple environments, such as dev, test, staging, and production, with strong governance and rollback capabilities.
- Design and maintain automation for notebooks, clusters (including serverless and Photon-optimized), Delta Live Tables pipelines, Workflows and jobs, and Delta Lake configurations.
- Monitor platform health, performance, and costs, including accessing audit logs, billable usage dashboards, system tables, and operational metrics, setting budget alerts, and optimizing spend.
- Enforce security and compliance best practices, including encryption, private networking, IP access lists, credential management, and alignment with FedRAMP and DoD standards.
- Automate administrative tasks using Databricks REST APIs, the Databricks Terraform Provider, Asset Bundles, scripting (Python or Bash), and CI/CD integrations for repeatable, auditable operations.
- Troubleshoot complex issues, performing root-cause analysis, maintaining disaster recovery and backup strategies, and providing Tier-3 support to users, data engineers, and platform teams.
- Mentor junior administrators, conduct platform reviews, document procedures, and stay current with Databricks features such as Lakeflow or enhanced governance to evolve client environments.

Join us. The world can’t wait.
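As one illustration of the cluster-policy governance work above, here is a minimal sketch, not a definitive implementation: Databricks cluster policies are JSON documents mapping cluster attributes to rules such as "fixed", "allowlist", and "range". The policy limits, node types, and Spark version below are hypothetical examples, not recommendations.

```python
import json


def build_cluster_policy() -> str:
    """Compose a minimal Databricks cluster policy definition as JSON.

    Cluster policies map cluster attribute paths to rule objects.
    The specific values here are illustrative only.
    """
    policy = {
        # Cap autotermination so idle clusters cannot run up costs.
        "autotermination_minutes": {"type": "range", "maxValue": 60},
        # Restrict node types to a hypothetical approved allowlist.
        "node_type_id": {
            "type": "allowlist",
            "values": ["i3.xlarge", "i3.2xlarge"],
        },
        # Pin a fixed Spark version and hide the field from users.
        "spark_version": {
            "type": "fixed",
            "value": "14.3.x-scala2.12",
            "hidden": True,
        },
        # Bound autoscaling to keep environments standardized.
        "autoscale.max_workers": {"type": "range", "maxValue": 8},
    }
    return json.dumps(policy, indent=2)


if __name__ == "__main__":
    print(build_cluster_policy())
```

The resulting JSON string would typically be supplied as the policy definition when creating a policy via the Databricks REST API or the `databricks_cluster_policy` Terraform resource, which keeps compute governance version-controlled alongside the rest of the infrastructure as code.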
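The SCIM-based provisioning mentioned above can likewise be scripted for repeatable, auditable operations. A minimal stdlib-only sketch under stated assumptions: the username and entitlement shown are hypothetical, and a real script would POST this payload to the Databricks SCIM endpoint with a bearer token rather than just building it.

```python
import json

# SCIM 2.0 core user schema URN, as used by the Databricks SCIM API.
SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"


def build_scim_user_payload(user_name: str, entitlements: list[str]) -> str:
    """Build a SCIM 2.0 request body for provisioning a Databricks user.

    Keeping entitlements explicit in the payload supports
    least-privilege reviews of who can do what.
    """
    payload = {
        "schemas": [SCIM_USER_SCHEMA],
        "userName": user_name,
        "entitlements": [{"value": e} for e in entitlements],
    }
    return json.dumps(payload)


# Hypothetical example user; in practice these values would come from
# an identity provider or HR feed, not be hard-coded.
example = build_scim_user_payload(
    "jane.doe@example.com", ["allow-cluster-create"]
)
```

Wrapping payload construction in a pure function like this makes the provisioning step easy to unit-test in CI before any request touches a live workspace.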
Job Type
Full-time
Career Level
Mid Level