Member of Technical Staff - Post Training, Reinforcement Learning

Liquid AI · San Francisco, California

Job Description

Work With Us

At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

While San Francisco and Boston are preferred, we are open to other locations in the United States.

This Role Is For You If:

  • You want to push the boundaries of small language model capabilities and open-source best-in-class checkpoints

  • You actively follow the latest reinforcement learning and optimization research and strive to put theory into practice

  • You're equally comfortable crafting domain-specific training environments and profiling GPU utilization in multi-turn asynchronous rollouts

Required Experience:

  • Strong Python and PyTorch proficiency, with hands-on experience optimizing training pipelines

  • Hands-on experience with reinforcement learning and the ability to translate optimization techniques from theory into practical implementations

  • Track record of integrating research ideas into robust, maintainable code

  • Experience with frameworks like DeepSpeed, FSDP, or vLLM for efficient model training and inference

  • Experience working with data pipelines, including curation, validation, and analysis to support post-training objectives

  • Contributions to open-source machine learning projects

  • M.S. or Ph.D. in Computer Science, Electrical Engineering, Mathematics, or a related field

What You'll Actually Do:

  • Profile, optimize, and scale RL training runs to reduce iteration time

  • Integrate new optimization techniques as they emerge from the research community

  • Design and implement tools and environments that test the boundaries of model capabilities

  • Turn proof-of-concept ideas into robust training pipelines and best-in-class models

What You'll Gain:

  • The opportunity to work directly on state-of-the-art AI systems at one of the most advanced AI companies in the world

  • A fast-paced, collaborative environment where your work has direct impact on model performance and product capability

  • The satisfaction of knowing your craftsmanship helps define the next frontier in AI

About Liquid AI

Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.
