Software Engineer, Product

Sierra AI · San Francisco, CA

Job Description

About us

  • At Sierra, we're building a platform to enable every company in the world to build their own autonomous AI agents for everything from customer service to commerce. We are primarily an in-person company based in San Francisco, with growing offices in Atlanta, New York, and London.

  • We are guided by a set of values that are at the core of our actions and define our culture: Trust, Customer Obsession, Craftsmanship, Intensity, and Family. These values are the foundation of our work, and we are committed to upholding them in everything we do.

  • Our co-founders are Bret Taylor and Clay Bavor. Bret currently serves as Board Chair of OpenAI. Previously, he was co-CEO of Salesforce (which had acquired the company he founded, Quip) and CTO of Facebook. Bret was also one of Google's earliest product managers and co-creator of Google Maps. Before founding Sierra, Clay spent 18 years at Google, where he most recently led Google Labs. Earlier, he started and led Google's AR/VR effort, Project Starline, and Google Lens. Before that, Clay led the product and design teams for Google Workspace.

What you'll do

Sierra's engineering team has ~40 mostly senior engineers, including Mihai, Julie, Arya, and Wei. We work in small, autonomous teams oriented around customer problems. Here are some examples of what you'll work on:

  • Agent Architecture: What primitives do we need to build agents that are steerable and verifiable, but also conversational and empathetic? How do we future-proof this as LLMs evolve?

  • Retrieval: How do we ground answers in a customer's knowledge base? How do we use retrieved context conversationally, handling cases where the answer is unclear or needs clarification from the user?

  • Evals: How do we measure an agent's quality? How do we empower our customers to improve agent quality?

  • Voice: How do we make our chat agents "just work" over the phone? How do we deliver lifelike conversations at low latency?

  • Simulation & Benchmarking: How can we craft a simulation platform to test AI agents against every real-world scenario imaginable? (See