The position involves developing and automating large-scale, high-performance data processing systems to enhance the product experience. Responsibilities include:

- Building scalable Spark data pipelines (see the sketch following this list) and solving business problems through innovative engineering practices.
- Participating in all phases of the Software Development Lifecycle (SDLC): analyzing requirements, incorporating architectural standards into application design specifications, documenting those specifications, and developing or enhancing software application modules.
- Designing telemetry and usage-tracking solutions for BI tools, and maturing modern data pipeline streams for analytics use cases.
- Collaborating with data scientists and product owners to understand data requirements.
- Designing data models for efficient storage and retrieval.
- Evaluating cloud enablement options for analytical tools, building audit-compliant cloud solutions, and defining deployment approaches for vendor tools.
- Conducting proofs of concept (POCs) for scalable analytics solutions.
- Monitoring infrastructure, troubleshooting technical issues, and performing API and application performance testing.
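
For illustration, here is a minimal PySpark sketch of the kind of telemetry-aggregation pipeline this role describes. The source path, column names, and aggregation logic are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal sketch of a usage-telemetry aggregation pipeline for BI reporting.
# Paths, schema, and metrics below are assumptions for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("usage-telemetry-pipeline").getOrCreate()

# Read raw telemetry events (hypothetical source location and schema).
events = spark.read.parquet("s3://example-bucket/raw/usage_events/")

# Aggregate daily usage per feature for downstream BI dashboards.
daily_usage = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "feature_name")
    .agg(
        F.countDistinct("user_id").alias("distinct_users"),
        F.count("*").alias("event_count"),
    )
)

# Write output partitioned by date so BI tools can prune partitions on read.
(
    daily_usage.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/daily_feature_usage/")
)
```

Partitioning the curated output by date is one common way to serve the "optimal storage and retrieval" goal mentioned above, since date-filtered dashboard queries then scan only the relevant partitions.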