
Senior Principal Software Engineer, AI Inference

Red Hat
Boston, Massachusetts

$189,600 - $312,730 / year


Overview

Schedule
Full-time
Career level
Senior-level
Remote
Remote
Compensation
$189,600-$312,730/year
Benefits
Health Insurance
Dental Insurance
Vision Insurance

Job Description

Job Summary

At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat AI Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM project, and inventors of state-of-the-art techniques for model compression, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

We are seeking an experienced Senior Principal Software Engineer to build and release the Red Hat AI Inference Server. You will own the full lifecycle, from compiling vLLM wheels across multiple hardware backends and architectures, to packaging enterprise-grade container images, managing multi-cloud infrastructure, and validating LLM accuracy and performance across a growing matrix of models and hardware. You will be building and shipping a product that runs on some of the most powerful AI hardware in production today, working across the full stack from C++/CUDA kernel compilation to Kubernetes-orchestrated model serving on OpenShift. If you want to work at the intersection of systems engineering, release engineering, and AI infrastructure on one of the most popular open-source projects on GitHub, this is the role for you.

Join us in shaping the future of AI!

What you will do

  • Build and release vLLM wheels across multiple hardware backends and CPU architectures, managing complex native dependency chains including PyTorch, Triton, and other accelerator-specific libraries

  • Design and maintain CI/CD pipelines spanning multiple platforms including GitHub Actions, GitLab CI, and Buildkite for build, test, and release workflows

  • Manage and scale multi-cloud GPU infrastructure using Terraform and Ansible, including both bare-metal and Kubernetes-based compute runners

  • Own the model validation pipeline, orchestrating accuracy evaluation, performance benchmarking, tool-calling validation, and smoke testing across dozens of LLMs on both bare metal and OpenShift

  • Develop and maintain the Python tooling and automation that powers the build, packaging, validation, and release processes

  • Drive adoption of agentic AI and intelligent automation to streamline engineering workflows, accelerate debugging, and reduce toil across the team

What you will bring

  • 8+ years of software engineering experience with significant depth in build systems, release engineering, or infrastructure

  • Strong Python development skills with experience building well-tested, maintainable tooling and automation

  • Hands-on experience building and packaging Python projects with native compiled extensions, including familiarity with C++ and CUDA build toolchains, wheel packaging, and multi-architecture builds

  • Deep familiarity with container ecosystems, including Dockerfiles and Containerfiles, image registries, and container build pipelines

  • Understanding of LLM evaluation methodology, including accuracy benchmarks such as MMLU, GSM8K, and HellaSwag, as well as inference performance metrics like throughput and latency

  • Experience with CI/CD platforms such as GitHub Actions, GitLab CI, Tekton, or Buildkite

  • Solid understanding of release engineering practices including reproducible builds, artifact management, dependency pinning, and security scanning

  • Experience with infrastructure-as-code tools such as Terraform and Ansible, and managing cloud resources at scale

  • Working knowledge of Kubernetes and/or OpenShift for deploying and testing workloads

  • Enthusiasm for applying LLM-based agents and AI-assisted tools to automate engineering workflows, with a track record of identifying repetitive processes and replacing them with intelligent automation

  • Excellent communication skills, capable of interacting effectively with both technical and non-technical team members

  • A Bachelor's or Master's degree in computer science, computer engineering, or a related field. A Ph.D. in an ML-related domain is a significant advantage.

The following is considered a plus:

  • Contributions to upstream open-source projects, particularly vLLM, PyTorch, or other AI/ML infrastructure

  • Experience with GPU-accelerated workloads and building software for heterogeneous hardware

  • Familiarity with LLM inference serving, model optimization, quantization techniques, or evaluation frameworks

  • Proficiency in C


The salary range for this position is $189,600.00 - $312,730.00. Actual offer will be based on your qualifications.

Pay Transparency

Red Hat determines compensation based on several factors including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity. Annual salary is one component of Red Hat’s compensation package. This position may also be eligible for bonus, commission, and/or equity. For positions with Remote-US locations, the actual salary range for the position may differ based on location but will be commensurate with job duties and relevant work experience. 

About Red Hat

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Benefits

  • Comprehensive medical, dental, and vision coverage

  • Flexible Spending Account - healthcare and dependent care

  • Health Savings Account - high deductible medical plan

  • Retirement 401(k) with employer match

  • Paid time off and holidays

  • Paid parental leave plans for all new parents

  • Leave benefits including disability, paid family medical leave, and paid military leave

  • Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!

Note: These benefits are only applicable to full time, permanent associates at Red Hat located in the United States. 

Inclusion at Red Hat

Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)

Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply. 

