
Senior Software Engineer - AI Inference

NVIDIA

US, California

$152,000 - $241,500 / year


Overview

Schedule: Full-time
Career level: Senior-level
Compensation: $152,000 - $241,500/year
Benefits: Paid Vacation

Job Description

NVIDIA is the platform upon which every new AI-powered application is built. We are seeking a Senior Software Engineer, AI Inference, to advance open-source LLM serving by contributing directly to upstream inference engines such as vLLM and SGLang, ensuring they run best-in-class on NVIDIA GPUs and systems, and by improving the underlying stack that enables high-throughput, low-latency inference at scale.

This is a hands-on role for an engineer who enjoys digging into performance bottlenecks, designing pragmatic runtime improvements, and shipping high‑quality changes that are broadly useful to the community and production deployments.

What you'll be doing:

  • Contribute features, fixes, and optimizations upstream to vLLM/SGLang: author PRs, participate in reviews, write benchmarks/tests, and help drive designs to completion.

  • Implement and optimize inference‑runtime capabilities: batching and scheduling policies, streaming, request lifecycle management, and KV‑cache efficiency (paging/sharding) to improve throughput and tail latency.

  • Profile and improve hot paths across layers, from Python orchestration to C++/CUDA kernels, using data to guide optimization work.

  • Improve multi‑GPU inference performance and reliability: parallelism strategies, communication patterns, and resource utilization across NVIDIA platforms.

  • Build and maintain performance and correctness regression tests to prevent slowdowns and ensure stable behavior across model and hardware configurations.

  • Collaborate with model, platform, and SRE teams to translate production requirements into upstreamable solutions with strong operability and maintainability.

What we need to see:

  • 5+ years building production software with solid systems engineering fundamentals and a track record of delivering performance or reliability improvements.

  • Experience with LLM inference/serving stacks (e.g., vLLM, SGLang) and an understanding of the tradeoffs that drive real production performance.

  • Strong programming skills in Python plus C++ and/or CUDA; ability to debug and optimize performance‑critical code.

  • Experience with profiling and performance investigation (microbenchmarks, flame graphs, GPU profiling) and a measurement‑driven mindset.

  • Familiarity with distributed systems concepts and concurrency (queues/schedulers, multi‑process/multi‑threading, scaling across GPUs/nodes).

  • Strong communication skills and comfort working with open‑source communities (issues, PR discussions, code review).

  • BS/MS in Computer Science, Computer Engineering, or related field (or equivalent experience).

Ways to stand out from the crowd:

  • Open‑source contributions to vLLM, SGLang, PyTorch, Triton, NCCL, Dynamo, or adjacent serving/runtime projects.

  • Shipped performance work such as improved attention/KV cache efficiency, speculative decoding, scheduler improvements, quantization-aware serving, or streaming latency reductions.

  • Experience building reproducible benchmarking and performance regression infrastructure for latency/throughput.

  • Systems performance background spanning memory bandwidth, kernel fusion, PCIe/NVLink effects, and network fabrics (e.g., InfiniBand).

We are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward‑thinking and creative people in the world working for us. If you're creative and autonomous with a real passion for technology, we want to hear from you.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until April 18, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

