About The Position

Snowflake empowers enterprises, and the people behind them, to achieve their full potential. With a culture built on impact, innovation, and collaboration, Snowflake is the place for building big, moving fast, and taking technology and careers to the next level. We are hiring a Principal Engineer II to architect the core data processing engine of the Snowflake Data & AI Cloud.

At Snowflake, we believe that high-performance, unified compute fabrics are the indispensable building blocks of Agentic AI. Autonomous agents require more than just models; they need a high-fidelity, low-latency state layer to reason, act, and persist context. This role is not about building traditional data processing pipelines or legacy ETL/ELT workflows; it is about building the core distributed systems and atomic primitives that make agentic workflows possible.

In this role, you will be a lead architect of the Snowflake Data Transformation Engine. You will design and implement the fundamental transformation infrastructure: stateful stream processing engines, incremental view maintenance kernels, materialization internals, and the distributed orchestration fabric. The solid foundation that lets enterprises move seamlessly between batch and streaming through Dynamic Tables is your starting point. You will build the systems that allow both data engineers and autonomous agents to process exabytes of data with sub-second state propagation and full transactional integrity across the Snowflake Data Cloud.

As a Principal Engineer, you will own the technical vision for the data transformation processing layer. Your work on streaming internals and declarative state management will be the primary foundation for the next generation of cognitive computing, enabling Snowflake Cortex and our agentic ecosystem to operate on live, governed data at scale while putting control of data freshness directly in the hands of our customers.
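For context on the feature at the heart of this role: Dynamic Tables are Snowflake's declarative mechanism for incremental materialization. A minimal sketch (table, column, and warehouse names here are hypothetical) looks like:

```sql
-- Declaratively define a transformation; the engine keeps the result
-- incrementally refreshed within the requested freshness target.
CREATE OR REPLACE DYNAMIC TABLE order_totals
  TARGET_LAG = '1 minute'      -- desired data freshness
  WAREHOUSE  = transform_wh    -- compute used for refreshes
  AS
    SELECT customer_id, SUM(amount) AS total_spent
    FROM raw_orders
    GROUP BY customer_id;
```

The engine chooses between incremental and full refresh on each cycle; specifying `TARGET_LAG = 'DOWNSTREAM'` instead defers refresh scheduling to the dynamic tables that consume this one. This declarative model is what replaces hand-built orchestration in the batch-to-streaming transition described above.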

Requirements

  • 14+ years of industry experience building database kernels, distributed systems internals, or large-scale data processing engines.
  • Mastery of Systems Programming: Deep expertise in stateful stream processing, incremental view maintenance, distributed transactions, and query execution internals.
  • Infrastructure-First Mindset: You are a systems builder. You prefer building the operating system and the engine rather than the application or the end-user pipeline.
  • Distributed Systems Expertise: Proven track record of solving complex problems in consensus, replication, and high-concurrency environments at cloud scale.
  • Ecosystem Awareness: A deep understanding of the architectural limitations of traditional tools like Airflow or dbt, and a vision for how to solve those challenges through native Snowflake system design.
  • Collaborative Leadership: Ability to work in a globally distributed environment, collaborate across product and engineering boundaries, and mentor senior and junior engineers alike.
  • Agent-Native Vision: A clear understanding of how the Engine must evolve to support autonomous agentic loops, including low-latency context injection and stateful memory for LLM-driven applications.

Nice To Haves

  • Database Internals: Direct experience developing query optimizers, storage engines, or transaction managers.
  • Streaming Ecosystem Internals: Deep knowledge of the internal workings of Flink, Beam, or Spark Streaming and their limitations in a multi-tenant cloud environment.
  • Autonomic Infrastructure: Experience building zero-ops infrastructure services or self-healing distributed systems for public clouds.

Responsibilities

  • Architect Foundation Primitives for Agentic AI: Design the internal engines for Dynamic Tables, Streams, and Tasks, ensuring the underlying processing kernels provide the elastic, serverless foundation required for real-time agentic reasoning.
  • Build the Autonomic Processing Fabric: Develop the low-level infrastructure for automated triggers and incremental processing logic, allowing the Snowflake engine to proactively manage, optimize, and heal data states without manual intervention.
  • Innovate in System Internals: Drive the long-term roadmap for stateful streaming, moving the industry toward a freshness-first system architecture where data is always ready for model consumption.
  • Displace Legacy Orchestration Layers: Identify how to build superior, native processing capabilities directly within the Snowflake kernel to eliminate the complexity of external schedulers, simplifying the architectural scaffolding for our customers.
  • Engineer for Global Scale and Governance: Design and implement highly reliable, multi-tenant system internals that handle exabytes of data while maintaining Snowflake’s industry-leading standards for resource isolation, security, and distributed consistency.
  • Drive Technical Strategy for the AI Era: Provide technical leadership to senior management and multiple departments, influencing how Snowflake’s core compute fabric evolves to support the burgeoning Model Context Protocol (MCP) and autonomous agent ecosystems.
  • Drive Customer Success through Direct Engagement: Partner with Snowflake’s most strategic customers and field engineering teams to translate massive-scale architectural challenges into core engine requirements, ensuring our processing primitives meet the real-world demands of the global Data Cloud.
  • Ensure Operational Excellence: Take responsibility for the operational readiness of the services, meeting the strict commitments to our customers regarding reliability, availability, and performance.

Benefits

  • Medical, dental, and vision insurance
  • Life and disability insurance
  • 401(k) retirement plan
  • Flexible spending and health savings accounts
  • At least 12 paid holidays
  • Paid time off
  • Parental leave
  • Employee assistance program
  • Other company benefits