About The Position

We're seeking a Software Engineer to bridge the critical gap between cutting-edge ML research and production-ready solutions. You'll transform research prototypes into robust, deployable systems that end users can confidently put into production. This role uniquely combines software engineering, DevOps practices, and ML solution delivery.

Requirements

  • Active TS/SCI with Poly clearance
  • 14+ years of software engineering experience
  • Strong programming skills in at least two of: C++, Java, Python, or Go
  • Solid understanding of DevOps practices and CI/CD pipelines
  • Experience with containerization (Docker) and orchestration (Kubernetes)
  • Experience with machine learning frameworks (PyTorch preferred)
  • Prior work in ML engineering or ML infrastructure
  • Ability to write clean, maintainable code with strong software engineering fundamentals
  • Experience taking projects from prototype to production
  • Strong communication skills for technical and non-technical audiences
  • Self-motivated with ability to work independently and collaboratively

Nice To Haves

ML & AI Technologies

  • Familiarity with ML domains: Natural Language Processing, Computer Vision, Automated Speech Recognition, or Video Processing
  • Knowledge of model formats and optimization (ONNX, TensorRT)

Technical Stack

  • Protocol Buffers (protobuf) and gRPC
  • NVIDIA technologies (CUDA, TensorRT, Triton Inference Server)
  • Signal processing techniques and libraries
  • Performance profiling and optimization tools

Professional Experience

  • Experience supporting production ML systems
  • Background in high-performance computing and/or GPU programming

Responsibilities

  • Productionize ML Research: Transform research code and prototypes from our ML team into reliable, scalable solutions ready for end-user deployment
  • Build Diverse Solutions: Develop applications and services in C++, Java, or Python; create gRPC-based containerized solutions with clients in Java, Python, or Go
  • Own the Delivery Pipeline: Design and maintain CI/CD pipelines, ensuring smooth deployment from development to production
  • Deploy ML Infrastructure: Configure and optimize containers using NVIDIA Triton Inference Server for high-performance inference
  • Performance Engineering: Profile, tune, and optimize solutions for production workloads
  • Documentation & Best Practices: Create comprehensive user documentation and establish deployment best practices
  • Collaborate Cross-Functionally: Work directly with end users to understand requirements and with researchers to align development with real-world needs

Benefits

  • Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives.
  • We offer competitive compensation, benefits, and learning and development opportunities.
  • Our broad and competitive mix of benefits options is designed to support and protect employees and their families.
  • At CACI, you will receive comprehensive benefits such as healthcare, wellness, financial, retirement, family support, continuing education, and time-off benefits.