At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat AI Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM project, and inventors of state-of-the-art model-compression techniques, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

We are seeking an experienced Senior Software Engineer to build and release the Red Hat AI Inference Server. You will own the full lifecycle: compiling vLLM wheels across multiple hardware backends and architectures, packaging enterprise-grade container images, managing multi-cloud infrastructure, and validating LLM accuracy and performance across a growing matrix of models and hardware. You will build and ship a product that runs on some of the most powerful AI hardware in production today, working across the full stack, from C++/CUDA kernel compilation to Kubernetes-orchestrated model serving on OpenShift.

If you want to work at the intersection of systems engineering, release engineering, and AI infrastructure on one of the most popular open-source projects on GitHub, this is the role for you. Join us in shaping the future of AI!
Job Type: Full-time
Career Level: Mid Level