- Build and maintain data pipelines that process high-volume subscriber data.
- Work with upstream systems to collect raw data and prepare it for downstream consumption.
- Design table structures and develop ETL workflows in Databricks and Snowflake.
- Develop automated data quality checks and enforce data reliability standards.
- Use Airflow for orchestration and schedule management.
- Tune SQL and Spark jobs for performance at large scale.
- Deploy schema changes using schemachange or similar tools.
- Partner with analytics, infrastructure, and product teams in a fast-paced environment.
- Support both net-new development and enhancements to existing pipelines.
Career Level: Mid Level
Number of Employees: 51-100 employees