Data Engineer (Contractor, PST)

Funded.club
10d · $70 · Remote

About The Position

We’re Blue River, a team of innovators driven to radically change agriculture by creating intelligent machinery. We empower our customers, farmers, to implement more sustainable solutions: optimizing chemical usage, reimagining routine processes, and improving farming yields year after year. We believe that focusing on the small stuff, pixel by pixel and plant by plant, leads to big gains. By partnering with John Deere, we combine computer vision, machine learning, robotics, and product management to solve monumental challenges for our customers.

Our people are at the heart of what we do. Through cross-discipline collaboration, this mission-driven and daring team is eager to define the new frontier of agricultural robotics. We are always asking hard questions, rapidly iterating, and getting our boots in the field to figure things out. We won’t give up until we’ve made a tangible and positive impact on agriculture.

We build the data backbone behind machines that are transforming agriculture. Blue River Technology, the team behind John Deere’s See & Spray™, uses computer vision and machine learning to help farmers reduce herbicide use and grow smarter. Our ML Platform team needs a Data Engineer to keep that intelligence flowing.

Why Blue River?

You’ll work at the intersection of robotics, ML, and sustainability, backed by the scale of John Deere. This isn’t data engineering in a vacuum. The pipelines you build directly power machines working in real fields, solving real problems.

Requirements

  • 2+ years in backend engineering / data infrastructure
  • Strong Python skills
  • Hands-on experience with ETL pipelines and data architecture
  • Solid grasp of relational and non-relational databases
  • AWS services (S3, DynamoDB, EC2, Lambda, ECR, SQS, SNS)
  • Docker + CI/CD (GitHub Actions or Jenkins)
  • Clear communicator who documents well and works cross-functionally
  • BS or MS in Computer Science

Nice To Haves

  • Databricks / Apache Spark
  • Kubernetes in production
  • Terraform or CloudFormation

Responsibilities

  • Own data pipelines end-to-end — ingestion, quality, performance, and reliability
  • Optimize queries and shape data architecture across the ML development lifecycle
  • Build and maintain ETL workflows that feed annotation, exploration, and model training
  • Support infrastructure scalability across AWS (S3, DynamoDB, Lambda, SQS, and more)
  • Improve code quality through automation, testing, and peer reviews
  • Collaborate directly with roboticists, backend engineers, and platform teams