Comcast · Posted 15 days ago
Reston, VA
Telecommunications

About the position

FreeWheel, a Comcast company, provides comprehensive ad platforms for publishers, advertisers, and media buyers. Powered by premium video content, robust data, and advanced technology, we make it easier for buyers and sellers to transact across all screens, data types, and sales channels. As a global company, we have offices in nine countries and deliver advertising around the world. This role provides technical leadership within a team responsible for creating and maintaining large-scale Big Data systems and data environments; full duties are listed under Responsibilities below. The position is eligible to work remotely within normal commuting distance of the worksite.

Responsibilities

  • Provide technical leadership within a team responsible for creating and maintaining large scale Big Data systems and Data Environments.
  • Ingest and process large data sets and provide actionable recommendations using data warehousing and business intelligence.
  • Design, develop, test, and maintain AdTech software that extracts, transforms, and loads large volumes of data.
  • Develop software systems in an Agile development environment and on big data platforms, including Apache Spark, AWS, Flink, and Hadoop.
  • Use SQL, Presto, Scala, GoLang, Java, and Python.
  • Use AWS services including EC2, Lambda, S3, and Route 53.
  • Use Datadog and Grafana for monitoring.
  • Use Jenkins for CI/CD.
  • Develop Microservices API systems to support overall product development.
  • Store relational data in MySQL to support API and data processing applications.
  • Debug functional and performance issues on big data platforms and software modules.
  • Concurrently execute data processing software using multithreading.
  • Write code and scripts to extract MRM ad logs from FreeWheel Big Data platforms, and load MRM ad logs and campaign data.
  • Perform audience targeting; ingest audience, identity, and segment data.
  • Create dashboards and monitors on Datadog to ensure 24x7 availability of critical software deployments.
  • Build new software products and web frontend frameworks.
  • Analyze product specifications, write technical specs, create monitoring dashboards, develop test suites, design workflows, and set up database schemas and tables.
  • Interface with global engineering, operations, services, and business operations teams to execute proof of concepts and incorporate new requirements.
  • Improve system performance and ensure availability and scalability of services.
  • Provide production support for data processing systems running on AWS cloud and Snowflake.
  • Troubleshoot data processing problems running on distributed systems.
  • Guide and mentor junior-level engineers.

Requirements

  • Bachelor's degree, or foreign equivalent, in Computer Science, Engineering, or related technical field.
  • Five (5) years of experience developing software in an Agile development environment using SQL, Presto, Scala, GoLang, Java, and Python on Big Data platforms including Apache Spark, AWS, Flink, and Hadoop.
  • Experience using Linux OS.
  • Experience using AWS services including EC2, Lambda, S3, and Route 53.
  • Experience using Datadog or Grafana for monitoring.
  • Experience using Jenkins for CI/CD.
  • Experience storing relational data in MySQL to support data processing applications.
  • Experience debugging functional and performance issues on big data platforms and software modules.

Nice-to-haves

  • Experience working in Agile environments.
  • Familiarity with Apache Spark and Presto.

Benefits

  • Comprehensive benefits options to support physical, financial, and emotional well-being.