About The Position

Peraton is seeking a skilled and experienced Senior Databricks Data Engineer / Architect to support high-impact, ongoing modernization, data, and cloud initiatives. In this role, you will design and build scalable Databricks Lakehouse solutions and lead complex data transformations, from initial data loads through incremental updates, producing bronze, silver, and gold copies of the data within a data pipeline. The data will be accessed via Business Intelligence (BI) tools, APIs, data extracts, and machine learning pipelines. This individual will be responsible for building and optimizing data pipelines, integrating diverse data sources, and enabling scalable, high-performance data solutions across the organization. This is an excellent opportunity for a professional eager to deepen their expertise in data engineering and data science within a collaborative environment. This is a part-time position.

Requirements

  • Minimum of 8 years of relevant experience with a BS/BA; minimum of 6 years with an MS/MA; minimum of 3 years with a PhD
  • Experience in data engineering or a related field
  • Hands-on experience with Databricks
  • Cloud experience (AWS or Azure)
  • Familiarity with CI/CD pipelines for data projects and automated deployment strategies
  • Strong proficiency in SQL and Python for data processing and ETL development
  • Familiarity with Agile/Scrum methodologies and Jira for project tracking
  • Solid understanding of data modeling, warehousing, and performance optimization
  • Must be a U.S. Citizen or Lawful Permanent Resident (LPR) with at least three consecutive years of U.S. residency from the date of legal entry as an LPR
  • Must be able to obtain and maintain the required MBI clearance
  • Strong communication skills

Nice To Haves

  • Enterprise data architecture experience
  • Exposure to machine learning (ML) workflows
  • Databricks / Cloud certifications
  • Knowledge of data governance, compliance, and security best practices
  • Active MBI Clearance

Responsibilities

  • Architect and design Development, Test, Pre‑production, Production, and Disaster Recovery (DR) environments in AWS using Databricks.
  • Install, configure, and troubleshoot Databricks PVC deployments across all environments, and provide documentation for future installations.
  • Develop and optimize ETL/ELT pipelines using SQL, Python, and PySpark.
  • Ingest, transform, and integrate data from diverse enterprise sources.
  • Define data models, data quality rules, and observability standards.
  • Lead performance tuning, cost optimization, and reliability enhancements for Databricks workloads.
  • Collaborate with architects, product owners, and business stakeholders to deliver data solutions.
  • Provide guidance on Databricks and cloud best practices.
  • Deliver peer training and mentorship as needed.
  • Contribute to defining and implementing the target-state Databricks architecture.
  • Support the development of the Data Governance framework and operating model for the target state.
  • Assist Cybersecurity with security assessments, ATO readiness, and related approvals.
  • Perform advanced Databricks performance optimization (e.g., Z‑Ordering, partitioning, bucketing) and monitor system performance.
  • Support self-service analytics and BI platforms.
  • Apply CI/CD and DataOps best practices to data engineering workflows.
  • Mentor engineers, conduct code reviews, and promote engineering excellence.
  • Build enterprise-scale data systems enabling analytics and reporting initiatives.
  • Document data pipelines, system architecture, and technical processes to ensure maintainability and knowledge transfer.