Roku · Posted 15 days ago
$186,000 - $340,000/Yr
Full-time • Senior
Santa Monica, CA
Professional, Scientific, and Technical Services

About the position

Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

Responsibilities

  • Develop best practices around cloud infrastructure provisioning and disaster recovery, and guide developers on their adoption
  • Scale Big Data and distributed systems
  • Collaborate on system architecture with developers for optimal scaling, resource utilization, fault tolerance, reliability, and availability
  • Conduct low-level systems debugging, performance measurement & optimization on large production clusters and low-latency services
  • Create scripts and automation that can react quickly to infrastructure issues and take corrective actions
  • Participate in architecture discussions, influence product roadmap, and take ownership and responsibility over new projects
  • Collaborate and communicate with a geographically distributed team

Requirements

  • Bachelor's degree, or equivalent work experience
  • 8+ years of experience in DevOps or Site Reliability Engineering
  • Experience with cloud infrastructure such as Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, or other public cloud platforms; GCP preferred
  • Experience with at least 3 of the following technologies/tools: Big Data / Hadoop, Kafka, Spark, Airflow, Presto, Druid, OpenSearch, HAProxy, or Hive
  • Experience with Kubernetes and Docker
  • Experience with Terraform
  • Strong background in Linux/Unix
  • Experience with system engineering around edge cases, failure modes, and disaster recovery
  • Experience with shell scripting or equivalent programming skills in Python
  • Experience with monitoring and alerting tools such as Grafana or PagerDuty, including participation in on-call rotations
  • Experience with Chef, Puppet, or Ansible
  • Experience with Networking, Network Security, and Data Security
  • AI literacy and curiosity: you have either 1) tried Gen AI in your previous work or outside of work, or 2) explored Gen AI out of curiosity

Benefits

  • Global access to mental health and financial wellness support and resources
  • Healthcare (medical, dental, and vision)
  • Life, accident, disability, commuter, and retirement options (401(k)/pension)
  • Paid time off for vacation and personal reasons