Staff Software Engineer - GenAI Inference

Databricks - San Francisco, CA

Job Description

P-1285

About This Role

As a staff software engineer for GenAI inference, you will lead the architecture, development, and optimization of the inference engine that powers the Databricks Foundation Model API. You'll bridge research advances and production demands, ensuring high throughput, low latency, and robust scaling. Your work will span the full GenAI inference stack: kernels, runtimes, memory management, and integration with serving frameworks and orchestration systems.

What You Will Do

  • Own and drive the architecture, design, and implementation of the inference engine, and collaborate on the model-serving stack optimized for large-scale LLM inference
  • Partner closely with researchers to bring new model architectures or features (sparsity, activation compression, mixture-of-experts) into the engine
  • Lead end-to-end optimization for latency, throughput, memory efficiency, and hardware utilization across GPUs and other accelerators
  • Define and guide standards for instrumentation, profiling, and tracing tooling that uncovers bottlenecks and informs optimization work
  • Architect scalable routing, batching, scheduling, memory management, and dynamic loading mechanisms for inference workloads
  • Ensure reliability, reproducibility, and fault tolerance in the inference pipelines, including A/B launches, rollback, and model versioning
  • Collaborate cross-functionally on integrating with federated, distributed inference infrastructure: orchestrating across nodes, balancing load, and handling communication overhead
  • Drive cross-team collaboration with platform engineering, cloud infrastructure, and security/compliance teams
  • Represent the team externally through benchmarks, whitepapers, and open-source contributions

What We Look For

  • BS/MS/PhD in Computer Science or a related field
  • Strong software engineering background (6+ years or equivalent) in performance-critical systems
  • Proven track record of owning complex system components and driving architectural decisions end-to-end
  • Deep understanding of ML inference internals: attention, MLPs, recurrent modules, quantization, sparse operations, etc.
  • Hands-on experience with CUDA, GPU programming, and key libraries (cuBLAS, cuDNN, NCCL, etc.)
  • Strong background in distributed systems design, including RPC frameworks, request queuing and batching, sharding, and memory partitioning
  • Demonstrated ability to uncover and solve performance bottlenecks across layers (kernel, memory, networking, scheduler)
  • Experience building instrumentation, tracing, and profiling tools for ML models
  • Ability to lead through influence: working closely with ML researchers and translating novel model ideas into production systems
  • Excellent communication and leadership skills, with a proactive and ownership-driven mindset
  • Bonus: published research or open-source contributions in ML systems, inference optimization, or model serving
