
Research Engineer, Infrastructure, Kernels

Thinking Machines Lab
San Francisco, California

$350,000 - $475,000 / year


Overview

Schedule: Full-time
Career level: Senior-level
Work arrangement: On-site
Compensation: $350,000 - $475,000 / year
Benefits: Health, dental, and vision insurance

Job Description

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals. 

We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

We’re looking for an infrastructure research engineer to design, optimize, and maintain the compute foundations that power large-scale language model training. You will develop high-performance ML kernels (e.g., CUDA, CuTe, Triton), enable efficient low-precision arithmetic, and improve the distributed compute stack that makes training large models possible.
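
For a flavor of the kernel work this role involves, here is a minimal sketch of a fused row-wise softmax in Triton, in the style of the Triton tutorials. The shapes, names, and launch parameters are illustrative assumptions, not code from Thinking Machines.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def softmax_kernel(out_ptr, in_ptr, row_stride, n_cols, BLOCK_SIZE: tl.constexpr):
        # One program instance per row; the whole row fits in a single block.
        row = tl.program_id(axis=0)
        offs = tl.arange(0, BLOCK_SIZE)
        mask = offs < n_cols
        x = tl.load(in_ptr + row * row_stride + offs, mask=mask, other=-float("inf"))
        x = x - tl.max(x, axis=0)  # subtract the row max for numerical stability
        num = tl.exp(x)
        tl.store(out_ptr + row * row_stride + offs, num / tl.sum(num, axis=0), mask=mask)

    def softmax(x: torch.Tensor) -> torch.Tensor:
        n_rows, n_cols = x.shape
        out = torch.empty_like(x)
        # BLOCK_SIZE must be a power of two and cover the full row.
        softmax_kernel[(n_rows,)](out, x, x.stride(0), n_cols,
                                  BLOCK_SIZE=triton.next_power_of_2(n_cols))
        return out

Fusing the max, exponentiation, and normalization into a single kernel reads each row from HBM once instead of three times, which is exactly the kind of memory-traffic saving this work targets.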

This role is perfect for an engineer who enjoys working close to the metal and across the research-engineering boundary. You’ll collaborate with researchers and systems architects to bridge algorithmic design with hardware efficiency. You’ll prototype new kernel implementations, profile performance across hardware generations, and help define the numerical and parallelism strategies that determine how we scale next-generation AI systems.

Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns with your experience and skills; we still encourage you to apply. We continuously review applications and reach out as new opportunities open. You are welcome to reapply once you have gained more experience, but please wait at least six months between applications. We also occasionally post standalone roles for specific project or team needs; in those cases, you are welcome to apply directly in addition to an evergreen role.

What You’ll Do

  • Design and implement custom ML kernels (e.g., CUDA, CuTe, Triton) for core LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for modern GPU and accelerator architectures.
  • Design and reason about compute primitives that reduce memory-bandwidth bottlenecks and improve kernel compute efficiency.
  • Collaborate with research teams to align kernel-level optimizations with model architecture and algorithmic goals.
  • Develop and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training (see the measurement sketch after this list).
  • Contribute to infrastructure stability and scalability, ensuring reproducibility, consistency across precision formats, and high utilization of compute resources.
  • Document and share insights through internal talks, technical papers, or open-source contributions to strengthen the broader ML systems community.
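
To make the benchmarking bullet concrete, below is a minimal measurement sketch using triton.testing.do_bench; the op, sizes, and the bandwidth_gbs helper are hypothetical choices for illustration.

    import torch
    import triton

    def bandwidth_gbs(fn, n_bytes):
        # do_bench runs fn many times and returns a median runtime in milliseconds.
        ms = triton.testing.do_bench(fn)
        return n_bytes / (ms * 1e-3) / 1e9

    x = torch.randn(1 << 26, device="cuda", dtype=torch.float16)
    y = torch.empty_like(x)
    # A device-to-device copy moves 2 bytes in and 2 bytes out per fp16 element.
    moved = 2 * x.numel() * x.element_size()
    print(f"copy: {bandwidth_gbs(lambda: y.copy_(x), moved):.0f} GB/s")

Comparing numbers like this against the accelerator's peak HBM bandwidth is the usual first check for whether a kernel is bandwidth-bound or compute-bound.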

Skills and Qualifications

Minimum qualifications:

  • Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.
  • Strong engineering skills: the ability to contribute performant, maintainable code and to debug complex codebases.
  • Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.
  • Thrive in a highly collaborative environment involving many different cross-functional partners and subject-matter experts.
  • A bias for action: the initiative to work across stacks and teams where you spot an opportunity, and to make sure things ship.
  • Proficiency in CUDA, CuTe, Triton, or other GPU programming frameworks.
  • Demonstrated ability to analyze, profile, and optimize compute-intensive workloads.

Preferred qualifications — we encourage you to apply if you meet some but not all of these:

  • Experience training or supporting large-scale language models with tens of billions of parameters or more.
  • Track record of improving research productivity through infrastructure design or process improvements.
  • Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators.
  • Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks.
  • Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM); a short sketch follows this list.
  • Contributions to open-source GPU, ML systems, or compiler optimization projects.
  • Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure.
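
As a hedged illustration of the low-precision work mentioned above, here is a per-tensor FP8 (E4M3) quantize/dequantize round trip, assuming a recent PyTorch build with float8 dtypes; production training stacks track scales dynamically and handle saturation far more carefully.

    import torch

    def to_fp8(x: torch.Tensor):
        # Scale so the largest magnitude maps to the E4M3 representable max (448).
        fp8_max = torch.finfo(torch.float8_e4m3fn).max
        scale = x.abs().max().clamp(min=1e-12) / fp8_max
        return (x / scale).to(torch.float8_e4m3fn), scale

    x = torch.randn(1024, 1024, device="cuda")
    x_fp8, scale = to_fp8(x)
    x_hat = x_fp8.to(torch.float32) * scale  # dequantize to check round-trip error
    print("max abs error:", (x - x_hat).abs().max().item())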

Logistics

  • Location: This role is based in San Francisco, California. 
  • Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
  • Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
  • Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.

As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.

