Principal Data Engineer

Fidelity
Durham, NC
Posted 10 days ago | Hybrid

About The Position

Position Description:

  • Builds automation pipelines using DevOps concepts and Continuous Integration and Continuous Delivery (CI/CD) tools: Jenkins, Stash, Concourse, and Artifactory.
  • Develops Oracle SQL and PL/SQL stored procedures for relational databases.
  • Develops new web-based applications within cloud environments, including Snowflake and Amazon Web Services (AWS).
  • Designs, builds, and maintains reporting platforms.
  • Writes code in object-oriented programming languages, chiefly Python with Spark (a brief illustrative sketch follows this list).
  • Performs shell scripting and job scheduling using Python and Spark.
  • Works in an Agile environment, executing projects using Kanban and Scrum.
  • Works with analysts to create profiles and rules for data quality using Informatica Data Quality (IDQ) tools and AddressDoctor.

The primary responsibilities and the education, experience, and skills requirements for the role are itemized in the Requirements and Responsibilities sections below.
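The description above centers on Python/Spark development for ETL and reporting platforms. As a loose illustration only, a minimal PySpark batch job of that kind might look like the sketch below; the paths, table names, and columns are hypothetical placeholders, not details from this posting.

```python
# Minimal PySpark batch ETL sketch. Paths, table names, and columns
# are hypothetical placeholders, not details from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_trades_report").getOrCreate()

# Extract: read raw records landed by an upstream process.
raw = spark.read.parquet("s3://example-bucket/raw/trades/")

# Transform: basic cleansing plus a daily aggregate for reporting.
daily = (
    raw.filter(F.col("status") == "SETTLED")
       .withColumn("trade_date", F.to_date("trade_ts"))
       .groupBy("trade_date", "desk")
       .agg(
           F.sum("notional").alias("total_notional"),
           F.count("*").alias("trade_count"),
       )
)

# Load: write a partitioned dataset consumed by the reporting layer.
(daily.write.mode("overwrite")
      .partitionBy("trade_date")
      .parquet("s3://example-bucket/curated/daily_trades/"))

spark.stop()
```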

Requirements

  • Bachelor’s degree in Computer Science, Engineering, Information Technology, Information Systems, or a closely related field (or foreign education equivalent) and five (5) years of experience as a Principal Data Engineer (or closely related occupation) designing and developing highly scalable Business Intelligence (BI) and analytical solutions on on-premises and cloud platforms in a financial services environment, using data warehouse and data mart methodologies.
  • Or, alternatively, Master’s degree in Computer Science, Engineering, Information Technology, Information Systems, or a closely related field (or foreign education equivalent) and three (3) years of experience as a Principal Data Engineer (or closely related occupation) designing and developing highly scalable Business Intelligence (BI) and analytical solutions on on-premises and cloud platforms in a financial services environment, using data warehouse and data mart methodologies.
  • Demonstrated Expertise (“DE”) designing and developing data warehouse applications according to business user requirements, using AWS services, Docker containers, Snowflake, Informatica, Oracle, PL/SQL, and Control-M.
  • Maintaining Continuous Integration/Continuous Delivery (CI/CD) pipelines for application code using Jenkins, Jenkinscore, Terracore, Bitbucket, GitHub, and Concourse.
  • Developing Unix shell scripts.
  • Creating Control-M jobs to automate and schedule end-to-end processes.
  • DE developing real-time Big Data solutions on Hadoop, using Hive/Impala, Kafka, Scala, Spark SQL, and Python to build highly scalable, highly available data platforms for end users (see the streaming sketch after this list).
  • DE participating in and implementing all aspects of the Software Development Lifecycle (SDLC) to deliver innovative solutions according to financial services standards, security requirements, software development best practices, and Agile methodologies.
  • Utilizing Alation and Collibra to efficiently catalog, analyze, and govern data assets, ensuring data quality, compliance, and informed decision-making across the organization.
  • DE performing Test-Driven Development (TDD) using JUnit; conducting performance testing using JMeter; performing data profiling and data mining; troubleshooting issues using Datadog for observability; and creating stable, highly available solutions using Service Level Objective (SLO) and Service Level Indicator (SLI) concepts.
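The real-time requirement above names Kafka alongside Spark SQL and Python. A minimal Spark Structured Streaming sketch of that shape follows, assuming the Spark-Kafka connector package is on the classpath; the broker address, topic, message schema, and paths are hypothetical placeholders, not details from this posting.

```python
# Minimal Spark Structured Streaming sketch: Kafka JSON messages -> parquet.
# Broker, topic, schema, and paths are hypothetical placeholders.
# Assumes the spark-sql-kafka connector package is available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("trades_stream").getOrCreate()

# Expected shape of each JSON message on the topic (hypothetical).
schema = StructType([
    StructField("account_id", StringType()),
    StructField("symbol", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read the Kafka topic as an unbounded stream and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "trades")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Continuously append parsed events to a queryable location; the
# checkpoint directory lets the job restart without losing progress.
query = (
    events.writeStream.format("parquet")
          .option("path", "/data/streams/trades")
          .option("checkpointLocation", "/data/checkpoints/trades")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```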

Responsibilities

  • Designing, developing, testing, deploying, maintaining and improving customer-facing software solutions.
  • Applying test automation frameworks and standard methodologies to build a reliable product.
  • Designing and building Extract, Transform, and Load (ETL) solutions while incorporating test automation frameworks in highly scalable distributed data processing systems (see the test sketch after this list).
  • Delivering software in an Agile environment.
  • Developing and maintaining databases using data warehousing and data mart principles.
  • Conferring with systems analysts and other software engineers/developers to design systems and to obtain information on project limitations and capabilities, performance requirements, and interfaces.
  • Developing and overseeing software system testing and validation procedures, programming, and documentation.
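The posting's requirements call for Test-Driven Development with JUnit, and the responsibilities above stress test automation around ETL. As a Python stand-in for that JUnit workflow (using the standard-library unittest module; the transformation under test, normalize_symbol, is a hypothetical example), a TDD-style sketch might look like:

```python
# TDD-style sketch using Python's unittest as a stand-in for JUnit.
# normalize_symbol is a hypothetical transformation under test.
import unittest


def normalize_symbol(raw: str) -> str:
    """Trim whitespace and upper-case a ticker symbol; reject empties."""
    cleaned = raw.strip().upper()
    if not cleaned:
        raise ValueError("symbol must be non-empty")
    return cleaned


class NormalizeSymbolTest(unittest.TestCase):
    # Under TDD these tests are written first and drive the implementation.
    def test_trims_and_uppercases(self):
        self.assertEqual(normalize_symbol("  aapl "), "AAPL")

    def test_rejects_empty_input(self):
        with self.assertRaises(ValueError):
            normalize_symbol("   ")


if __name__ == "__main__":
    unittest.main()
```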