About The Position

Cavnue, a Consor company, is redefining what’s possible for modern transportation infrastructure. As the digital infrastructure platform within Consor Engineers, a leading North American engineering and advisory firm, Cavnue combines advanced roadway technology with deep transportation design and delivery expertise to modernize how roads are operated and managed. We believe the future of roadways is connected, intelligent, and designed to actively improve safety, efficiency, and accessibility for all users.

While vehicle technologies such as advanced driver assistance systems and automated driving continue to evolve rapidly, roadway infrastructure must evolve alongside them. Cavnue is closing that gap by developing and deploying smart road technologies that deliver a real-time digital twin of the roadway environment, leveraging AI-enabled analytics, sensors, cameras, and integrated digital infrastructure to provide actionable operational insights. Together with Consor, we are accelerating the deployment of these solutions at scale, helping public and private clients modernize infrastructure and prepare for the next generation of mobility. With active projects across the United States, our work is setting a new standard for roadway performance today while laying the foundation for a safer, more efficient, and more connected transportation future. Join us as we shape the future of infrastructure, where digital innovation meets engineering excellence to deliver measurable real-world impact.

Role Overview

We are looking for a Senior Software Engineer with deep expertise in database systems, data pipelines, and real-time data infrastructure. As a core member of our engineering team, you will own the design and evolution of our data layer, from schema design and query optimization to building scalable pipelines and APIs that power our platform.
This is a fully remote, hands-on individual contributor role where you'll wear many hats, move fast, and have significant ownership over foundational systems. If you enjoy collaborating closely with a tight-knit team and having real ownership over the systems you build, we'd love to meet you.

Requirements

  • 5+ years of professional software engineering experience, with a meaningful focus on database systems and data infrastructure.
  • Deep expertise in PostgreSQL, including schema design, query optimization, indexing strategies, and performance tuning.
  • Hands-on experience with time-series databases (TimescaleDB/TigerDB, or similar).
  • Strong proficiency in Python and/or C++ for pipeline and backend development.
  • Experience building and operating data pipelines and data warehouses in a production environment.
  • Familiarity with GCP data products (e.g., BigQuery, Cloud Storage, Pub/Sub).
  • Experience with containerization (Docker) and orchestration (Kubernetes).
  • Comfortable working in a Linux development environment.
  • Strong communication skills — you can explain data models and architecture decisions clearly to both technical and non-technical stakeholders.
  • Demonstrated ability to work independently and make meaningful progress with minimal guidance.
  • BS in Computer Science, Engineering, or equivalent practical experience.

Nice To Haves

  • Experience with Redis for caching, real-time data access patterns, and streams.
  • Familiarity with Infrastructure as Code tools (Terraform).
  • Experience with GitLab workflows and CI/CD pipelines.
  • Deeper proficiency with GCP data products, including BigQuery for analytics and large-scale querying.
  • Experience with agentic coding tools (e.g., Claude Code, Cursor, or similar AI-assisted development environments).
  • Prior experience in an early-stage startup environment.
  • Exposure to ML/AI data workflows, feature stores, training data pipelines, or model serving infrastructure.

Responsibilities

  • Design, build, and maintain robust database schemas optimized for both transactional and time-series workloads, primarily using PostgreSQL and TimescaleDB/TigerDB.
  • Architect and manage scalable data pipelines, from raw ingestion through transformation and storage, with a focus on reliability, performance, and maintainability.
  • Build and operate data warehouse infrastructure on GCP, ensuring data is well-organized, queryable, and cost-effective at scale.
  • Develop and maintain APIs that expose clean, performant interfaces for internal services and external consumers.
  • Implement real-time and streaming data solutions that handle high-throughput, low-latency workloads.
  • Own data quality, monitoring, and alerting across the data layer.
  • Design and maintain data warehouse architecture that supports system playback and integration testing, enabling engineers to replay real-world data scenarios against the full stack.
  • Collaborate closely with engineers, researchers, and hardware teams to understand data requirements and translate them into reliable infrastructure.
  • Write clear documentation for schemas, pipeline designs, data contracts, and operational runbooks.
  • Contribute to a healthy, high-trust engineering culture where good ideas can come from anywhere.

Benefits

  • Medical, dental, and vision benefits
  • Life insurance and disability insurance
  • 401(k) with company contribution
  • Paid parental leave
  • Fertility and infertility benefits
  • Industry-competitive PTO
  • Learning and development opportunities