As an AI Enablement Data Engineer in the Data and AI Centre of Excellence (DAICOE), you will be embedded within mixed project teams and be responsible for building robust, fault-tolerant data pipelines that collect, assemble, transform, and aggregate distributed, unorganized data into modern data platforms. You will take full ownership of planning, decision-making, and execution of end-to-end data pipeline development, embedding quickly within project teams with minimal support required. You will install and configure database systems; write complex queries and data tests using dbt, SQL, and Python; scale solutions across distributed platforms; and implement disaster recovery systems. You will lay the groundwork for data consumers (software or human) to easily retrieve the data they need for evaluations and experiments. You will also support operational data use cases, such as moving large volumes of data across applications via operational data stores, data hubs, and data lakes, and build private, segregated data pipelines between specific applications.

Our technology stack: Databricks, dbt Core, GitHub, Python, SQL, Spark, Snowflake, Iceberg, AWS, and PySpark.
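To illustrate the kind of data tests this role involves, here is a minimal sketch in plain Python of two checks analogous to dbt's built-in `unique` and `not_null` tests (in dbt these would normally be expressed as SQL/YAML; all table, column, and function names here are illustrative, not taken from the posting):

```python
# Minimal sketch of two data-quality checks, analogous to dbt's
# built-in `not_null` and `unique` tests. All names are illustrative.

def check_not_null(rows, column):
    """Return the rows where `column` is missing or None (the failures)."""
    return [r for r in rows if r.get(column) is None]

def check_unique(rows, column):
    """Return the values of `column` that appear more than once (the failures)."""
    seen, dupes = set(), set()
    for r in rows:
        value = r.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

# Hypothetical sample data: one null customer_id, one duplicated order_id.
orders = [
    {"order_id": 1, "customer_id": "a"},
    {"order_id": 2, "customer_id": None},
    {"order_id": 2, "customer_id": "b"},
]

null_failures = check_not_null(orders, "customer_id")  # one failing row
dupe_failures = check_unique(orders, "order_id")       # [2]
```

As in dbt, each test returns the failing records, and a passing test returns an empty result.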
Job Type
Full-time
Career Level
Mid Level