Data Engineer - Associate

Morgan Stanley
New York, NY
$90,000 - $150,000

About The Position

Design, develop, and maintain robust data pipelines, ETL processes, and large-scale data warehouse solutions that support enterprise-level reporting and analytics initiatives, collaborating with business stakeholders, data scientists, and engineering teams across the data lifecycle. The full scope of the role is listed under Responsibilities below.

We do it in a way that's differentiated - and we've done that for 90 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren't just beliefs; they guide the decisions we make every day to do what's best for our clients, communities and more than 80,000 employees in 1,200 offices across 42 countries. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There's also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please visit https://www.morganstanley.com/about-us/global-offices.
Expected base pay for the role will be between $90,000 and $150,000 per year at the commencement of employment.

Our recruiting efforts reflect our desire to attract and retain the best and brightest from all talent pools, and we want to be the first choice for prospective employees. It is the policy of the Firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, sex stereotype, gender, gender identity or expression, transgender status, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy, veteran or military service status, genetic information, or any other characteristic protected by law.

Requirements

  • 3-5 years of hands-on experience in data engineering, database development, or related roles.
  • Strong expertise in SQL (query optimization, data modeling, and performance tuning).
  • Proficiency with Snowflake (or similar cloud data platforms) and Python for data processing and automation.
  • Experience with ETL tools, scripting (Shell/Python), and building scalable data pipelines.
  • Solid understanding of data warehousing concepts and best practices.
  • Familiarity with Power BI for data visualization and reporting.
  • Experience with Apache Airflow or similar workflow orchestration tools.
  • Excellent problem-solving and analytical skills.
  • Strong communication skills and ability to work collaboratively in a global team environment.

Nice To Haves

  • Exposure to OLAP tools and multidimensional data modeling.
  • Experience or interest in leveraging GenAI, LLMs, or Copilot tools for data engineering, automation, or reporting use cases.

Responsibilities

  • Design, develop, and maintain robust data pipelines, ETL processes, and large-scale data warehouse solutions to support enterprise-level reporting and analytics initiatives.
  • Drive automation of data workflows and reporting processes to improve efficiency and reduce manual intervention, leveraging modern tools and AI/GenAI solutions where possible.
  • Enable and optimize cloud-based data platforms for scalable data storage, processing, and reporting.
  • Collaborate with business stakeholders, data scientists, and engineering teams to understand requirements and deliver high-quality data solutions.
  • Optimize and troubleshoot SQL queries, data models, and ETL scripts for performance and reliability.
  • Ensure data integrity, quality, and security across all stages of the data lifecycle.
  • Document data flows, processes, and technical solutions for ongoing support and knowledge sharing.
  • Contribute to a culture of innovation, continuous learning, and agile delivery within the team.