Pinnacle Technical Resources
$85 - $91/Hr
Senior
Phoenix, AZ
Professional, Scientific, and Technical Services

Client is currently seeking a Lead Cloud Data Platform Engineer to apply deep technical skills to create data products, develop AI-based automation tools, and build a world-class cloud analytics capability for the client's Cyber Security Data Ecosystem on a hybrid cloud. The successful candidate will continually innovate, pioneer the use of new technologies, and drive their adoption across a team of talented data engineers. The candidate will be an integral part of the client's migration from on-premises systems to Google Cloud and a vital member of an agile team helping to lead the design and hands-on implementation of modern data processing capabilities. This is a visible role that allows you to share your knowledge and skills with other developers and product teams in a collaborative environment.

  • Implement and operationalize modern self-serve data capabilities on Google Cloud to ingest, transform, and distribute data for a variety of big data apps
  • Enable secure data pipelines to ensure data protection in transit and at rest
  • Automate data governance capabilities to ensure proper data observability throughout the data flows
  • Leverage AI/Agentic frameworks to automate data management, governance, and data consumption capabilities
  • Create repeatable processes for standing up data pipelines that fuel analytics products and business decisions
  • Work with principal engineers, product managers, and data engineers to roadmap, plan, and deliver key data capabilities based on priority
  • Create the future of data: design and implement processes using the entire GCP toolset

  • 5+ years of experience in data engineering, including hands-on experience with Hadoop and Google Cloud data solutions: creating and supporting Spark-based processing and Kafka streaming in a highly collaborative team
  • 3+ years of experience with data lakehouse architecture and design, including hands-on experience with Python, PySpark, Apache Kafka, Airflow, SQL, Google Cloud Storage, BigQuery, Dataproc, and Cloud Composer
  • 2+ years working with NoSQL databases such as columnar, graph, document, and key-value stores, and their associated data formats
  • Public cloud certifications such as GCP Professional Data Engineer, Azure Data Engineer, or AWS Specialty Data Analytics
  • Proven skills with data migration from on-premises systems to a cloud-native environment
  • Proven experience working with the Hadoop ecosystem capabilities such as Hive, HDFS, Parquet, Iceberg, and Delta Tables
  • Deep understanding of data warehouse, data cloud architecture, building data pipelines, and orchestration
  • Design and implementation of highly scalable and modular data pipelines with built-in data controls for automating data governance
  • Familiarity with GenAI frameworks such as LangChain and LangGraph for developing agent-based data capabilities
  • DevOps and CI/CD deployments, including Git, Jenkins, Docker, and Kubernetes
  • Web-based UI development using React and Node.js is a plus

  • Medical insurance
  • Dental insurance
  • Vision insurance
  • 401K contributions
  • PTO
  • Sick leave