Fanatics • Posted 14 days ago
$144,000 - $234,000/Yr
Full-time • Senior
Remote • San Mateo, CA

The Streaming Data Platform team is responsible for building and managing complex stream processing topologies using the latest open-source tech stack, building metrics and visualizations on the generated streams, and creating varied data sets for different forms of consumption and access patterns. We're looking for a seasoned Staff Software Engineer to help us build and scale the next generation of streaming platforms and infrastructure at Fanatics Commerce.

Responsibilities:

  • Build data platforms and streaming engines that are real-time and batch in nature
  • Optimize existing data platforms and infrastructure while exploring other technologies
  • Provide technical leadership to the data engineering team on storing and processing data more efficiently at scale
  • Build and scale stream & batch processing platforms using the latest open-source technologies
  • Work with data engineering teams and help with reference implementation for different use cases
  • Improve existing tools to deliver value to the users of the platform
  • Work with data engineers to create services that can ingest and supply data to and from external sources and ensure data quality and timeliness

Qualifications:

  • 8+ years of software development experience, with at least 3 years on open-source big data technologies
  • Knowledge of common design patterns used in Complex Event Processing
  • Knowledge in Streaming technologies: Apache Kafka, Kafka Streams, KSQL, Spark, Spark Streaming
  • Proficiency in Java, Scala
  • Strong hands-on experience in SQL, Hive, Spark SQL, Data Modeling, Schema design
  • Experience with and a deep understanding of traditional, NoSQL, and columnar databases
  • Experience building scalable infrastructure to support stream, batch, and micro-batch data processing
  • Experience utilizing Apache Iceberg as the backbone of a modern lakehouse architecture, supporting schema evolution, partitioning, and scalable data compaction across petabyte-scale datasets
  • Experience utilizing AWS Glue as a centralized data catalog to register and manage Iceberg tables, enabling seamless integration with real-time query engines and improving data discovery across distributed systems
  • Experience working with Druid/StarRocks/Apache Pinot, powering low-latency queries, routine Kafka ingestion, and fast joins across both historical and real-time data
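To give a flavor of the Complex Event Processing design patterns the role calls for, here is a minimal, framework-free sketch of one classic pattern: sliding-window threshold detection ("N events of a kind within a time window"). This is an illustration only, written against the plain JDK rather than Kafka Streams or Spark Streaming; the `Event` record and `ThresholdDetector` class are hypothetical names invented for this sketch, not part of any of the libraries listed above.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of sliding-window threshold detection, a common CEP
// pattern. In a real platform this logic would run inside a streaming engine
// (e.g. Kafka Streams windowed aggregations); here it is JDK-only.
public class ThresholdDetector {
    record Event(String key, Instant timestamp) {}

    private final Deque<Event> window = new ArrayDeque<>();
    private final Duration windowSize;
    private final int threshold;

    ThresholdDetector(Duration windowSize, int threshold) {
        this.windowSize = windowSize;
        this.threshold = threshold;
    }

    // Returns true when the count of events inside the sliding window
    // reaches the threshold (e.g. "3 failed logins within 1 minute").
    boolean onEvent(Event e) {
        window.addLast(e);
        // Evict events older than the window, relative to the newest event.
        Instant cutoff = e.timestamp().minus(windowSize);
        while (!window.isEmpty() && window.peekFirst().timestamp().isBefore(cutoff)) {
            window.removeFirst();
        }
        return window.size() >= threshold;
    }

    public static void main(String[] args) {
        ThresholdDetector d = new ThresholdDetector(Duration.ofMinutes(1), 3);
        Instant t0 = Instant.parse("2024-01-01T00:00:00Z");
        System.out.println(d.onEvent(new Event("login-fail", t0)));                 // false
        System.out.println(d.onEvent(new Event("login-fail", t0.plusSeconds(10)))); // false
        System.out.println(d.onEvent(new Event("login-fail", t0.plusSeconds(20)))); // true
        System.out.println(d.onEvent(new Event("login-fail", t0.plusSeconds(90)))); // false (older events evicted)
    }
}
```

A streaming engine generalizes this same idea with partitioned state stores and fault-tolerant changelogs, which is what makes the pattern workable at the scale this role describes.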

Benefits:

  • Health, dental, and vision insurance
  • 401(k) plan with company match
  • Paid time off and holidays
  • Professional development opportunities
  • Flexible work arrangements