Databricks • posted 2 days ago
$142,200 - $204,600/Yr
Full-time • Entry Level
Mountain View, CA

About the position

Databricks is radically simplifying the entire data lifecycle, from ingestion to generative AI and everything in between. We’re doing it cross-cloud with a unified platform, serving over 10k customers, processing exabytes of data per day on 15+ million VMs, and growing exponentially.

The Lakeflow team is looking for recent PhD graduates. The Lakeflow team’s products include Apache Spark™ Structured Streaming, Delta Live Tables (DLT), and Materialized Views. Apache Spark™ Structured Streaming is one of the world’s most popular streaming engines. DLT makes it easy to build and manage reliable batch and streaming data pipelines that deliver high-quality data on the Databricks Lakehouse Platform. DLT helps data engineering teams simplify ETL (extract-transform-load) development and management with declarative pipeline development, automatic data testing, and deep visibility for monitoring and recovery. DLT optimizes pipeline execution through logical optimization, such as query transformations, and physical optimization, such as instance type selection and vertical/horizontal autoscaling.

Moreover, as part of DLT, we have a new catalyst optimization layer, Enzyme, designed specifically to speed up the ETL process and make declarative ETL computation possible by incrementally computing and materializing intermediate results. Enzyme can create and keep up to date a materialization of the results of a given query, stored in a Delta table. It does this by using a cost model to choose among a variety of techniques that borrow from the traditional literature on materialized view maintenance, delta-to-delta streaming, and manual ETL patterns commonly used by our customers.
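The incremental-materialization idea behind Enzyme can be illustrated with a minimal, hypothetical sketch. This is plain Python, not Databricks or Enzyme code, and the class and method names are invented for illustration: rather than recomputing an aggregate from scratch on every run, the stored result is updated by applying only the delta of newly arrived rows.

```python
from collections import defaultdict


class IncrementalSumView:
    """Toy materialization of SUM(amount) GROUP BY key, maintained incrementally.

    Illustrative only: real systems also handle deletes, updates, joins, and
    use a cost model to decide between incremental and full recomputation.
    """

    def __init__(self):
        # The materialized result: key -> running sum.
        self.materialized = defaultdict(float)

    def apply_delta(self, new_rows):
        # Process only the new rows; the prior materialized result is reused
        # instead of rescanning all historical input.
        for key, amount in new_rows:
            self.materialized[key] += amount

    def query(self, key):
        # Reads are served directly from the materialization.
        return self.materialized[key]


view = IncrementalSumView()
view.apply_delta([("eu", 10.0), ("us", 5.0)])  # initial batch
view.apply_delta([("eu", 2.5)])                # incremental update touches only the delta
```

The design choice this illustrates is the core trade-off Enzyme reasons about: maintaining a materialization incrementally costs work proportional to the delta rather than to the full input, which is what makes declarative, continuously refreshed ETL results practical.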
As part of the Lakeflow DLT team, there are opportunities to design and implement in many areas that leapfrog existing systems: query compilation and optimization, distributed query execution and scheduling, vectorized engine execution, resource management, transaction coordination, efficient storage structures (encoding, indexes), and automatic physical data optimization.

Responsibilities

  • Design and implement query compilation and optimization.
  • Work on distributed query execution and scheduling.
  • Develop vectorized engine execution.
  • Manage resource allocation and optimization.
  • Coordinate transactions effectively.
  • Create efficient storage structures including encoding and indexes.
  • Implement automatic physical data optimization.

Requirements

  • PhD in databases or systems.
  • Knowledge of database systems, storage systems, distributed systems, and performance optimization.
  • Motivated to deliver customer value and influence.

Benefits

  • Comprehensive benefits and perks that meet the needs of all employees.
  • Eligibility for annual performance bonus.
  • Equity options.