
Senior Software Engineer - Parallel Computing Systems
$184,000 - $287,500 / year

Job Description
Do you have expertise in CUDA kernel optimization, C++ systems programming, or compiler infrastructure? Join NVIDIA's nvFuser team to build the next-generation fusion compiler that automatically optimizes deep learning models for workloads scaling to thousands of GPUs! We're looking for engineers who excel at parallel programming and systems-level performance work and want to directly impact the future of AI compilation.
The Deep Learning Frameworks Team @ NVIDIA is responsible for building nvFuser, an advanced compiler that sits at the intersection of compiler technology and high-performance computing. You'll work closely with the PyTorch Core team and collaborate with Lightning-AI/Thunder, which integrates nvFuser to accelerate PyTorch workloads. We collaborate with hardware architects, framework maintainers, and optimization experts to create compiler infrastructure that advances GPU performance, turning manual optimization techniques into systematic, automated compiler optimizations.
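The core idea behind a fusion compiler like nvFuser can be sketched in plain Python (a conceptual illustration only, not nvFuser's actual code or API): executing elementwise operations one at a time materializes an intermediate buffer between them, while a fused version computes the whole chain per element in a single pass.

```python
# Conceptual sketch of operator fusion (illustrative, not nvFuser code).

def unfused(xs):
    # Two separate passes with an intermediate buffer between them,
    # analogous to launching two GPU kernels with a round trip
    # through global memory in between.
    tmp = [x * 2.0 for x in xs]        # op 1: scale
    return [t + 1.0 for t in tmp]      # op 2: bias

def fused(xs):
    # One pass: both ops applied per element, no intermediate buffer,
    # analogous to a single fused kernel emitted by the compiler.
    return [x * 2.0 + 1.0 for x in xs]

if __name__ == "__main__":
    data = [1.0, 2.0, 3.0]
    assert unfused(data) == fused(data) == [3.0, 5.0, 7.0]
```

On a GPU the payoff is memory bandwidth: the fused form reads each input and writes each output exactly once, instead of spilling intermediates to global memory between kernels.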
What you'll be doing
As an nvFuser engineer, you'll work on exciting challenges in compiler technology and performance optimization! You'll design algorithms that generate highly optimized code from deep learning programs and build GPU-aware CPU runtime systems that coordinate kernel execution for maximum performance. Working directly with NVIDIA's hardware engineers, you'll master the latest GPU architectures while collaborating with optimization specialists to develop innovative techniques for emerging AI workloads. From debugging performance bottlenecks in thousand-GPU distributed systems to influencing next-generation hardware design, we push the boundaries of what's possible in AI compilation.
What we need to see
MS or PhD in Computer Science, Computer Engineering, Electrical Engineering, or related field (or equivalent experience).
4+ years advanced C++ programming with large codebase development, template meta-programming, and performance-critical code.
Strong parallel programming experience with multi-threading, OpenMP, CUDA, MPI, NCCL, NVSHMEM, or other parallel computing technologies.
Demonstrated experience with low-level performance optimization and systematic bottleneck identification beyond basic profiling.
Performance analysis skills: experience analyzing high-level programs to identify performance bottlenecks and develop optimization strategies.
Collaborative problem-solving approach with adaptability in ambiguous situations, first-principles based thinking, and a sense of ownership.
Excellent verbal and written communication skills.
Ways to stand out from the crowd
Experience with HPC/Scientific Computing: CUDA optimization, GPU programming, numerical libraries (cuBLAS, NCCL), or distributed computing.
Compiler engineering background: LLVM, GCC, domain-specific language design, program analysis, or IR transformations and optimization passes.
Deep technical foundation in CPU/GPU architectures, numeric libraries, modular software design, or runtime systems.
Experience with large software projects, performance profiling, and demonstrated track record of rapid learning.
Expertise with distributed parallelism techniques, tensor operations, auto-tuning, or performance modeling.
You will also be eligible for equity and benefits.
