Member of Technical Staff — Design Engineering

Job Description
TensorZero enables a data and learning flywheel for optimizing LLM applications: a feedback loop that turns production metrics and human feedback into smarter, faster, and cheaper models and agents.
Today, we provide an open-source stack for building industrial-grade LLM applications that unifies an LLM gateway, observability, optimization, evaluation, and experimentation. You can take what you need, adopt incrementally, and complement with other tools. Over time, these components enable you to set up a principled feedback loop for your LLM application. The data you collect is tied to your KPIs, ports across model providers, and compounds into a competitive advantage for your business.
Our vision is to automate much of LLM engineering. We're laying the foundation for that with open-source TensorZero. For example, with our data model and end-to-end workflow, we will be able to proactively suggest new variants (e.g. a new fine-tuned model), backtest it on historical data (e.g. using diverse techniques from reinforcement learning), enable a gradual, live A/B test, and repeat the process. With a tool like this, engineers can focus on higher-level workflows — deciding what data goes in and out of these models, how to measure success, which behaviors to incentivize and disincentivize, and so on — and leave the low-level implementation details to an automated system. This is the future we see for LLM engineering as a discipline.
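To make the feedback loop concrete, here is a minimal sketch of how an application might talk to the gateway over HTTP, assuming a gateway running locally. The function name, metric name, and response fields below are illustrative assumptions for this example; the GitHub repository is the source of truth for the actual API.

```python
# Illustrative sketch of the inference -> feedback loop, assuming a TensorZero
# gateway running locally on port 3000. Names and field shapes are examples,
# not the definitive API.
import requests

GATEWAY_URL = "http://localhost:3000"

# 1. Run an inference through the gateway, which sits in front of model
#    providers and handles routing, observability, and experimentation.
inference = requests.post(
    f"{GATEWAY_URL}/inference",
    json={
        "function_name": "draft_changelog",  # hypothetical application function
        "input": {
            "messages": [{"role": "user", "content": "Summarize this diff..."}]
        },
    },
).json()

# 2. Later, attach feedback tied to a business KPI (e.g. whether a human
#    accepted the draft). This is the data that compounds over time and feeds
#    optimization (fine-tuning, RL, variant selection).
requests.post(
    f"{GATEWAY_URL}/feedback",
    json={
        "metric_name": "changelog_accepted",  # hypothetical metric
        "inference_id": inference["inference_id"],
        "value": True,
    },
)
```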
For more details, see:
GitHub Repository
Announcement: TensorZero Raises $7.3M Seed Round to Build an Open-Source Stack for Industrial-Grade LLM Applications
Case Study: Automating Code Changelogs at a Large Bank with LLMs
Essay: Think of LLM Applications as POMDPs — Not Agents
VentureBeat: TensorZero nabs $7.3M seed to solve the messy world of enterprise LLM development
Role
We are looking for a Founding Member of Technical Staff with a background in design engineering. The vast majority of your work will be open source. You’ll have an opportunity to continue to master your current skills with the flexibility to learn new ones from scratch.
As a preview, if you joined today, you'd take on our open-source UI that helps engineers manage the entire TensorZero operation — think of it like the AWS Console for TensorZero. The UI streamlines workflows for observability, optimization (e.g. fine-tuning), evaluations, and more.
Team & Culture
We’re a small, deeply technical team based in NYC (in person). As an early contributor, you’ll work closely with us and have a significant impact on the project’s future and vision.
Viraj Mehta (Co-Founder & CTO) is an ML researcher with deep expertise in reinforcement learning, generative modeling, and LLMs. He received a PhD from CMU with an emphasis on data-efficient RL for nuclear fusion and LLMs, and previously worked in machine learning at KKR and a fintech startup. He holds a BS in math and an MS in computer science from Stanford.
Gabriel Bianconi (Co-Founder & CEO) was the chief product officer at Ondo Finance ($20B+ valuation) and previously spent years consulting on machine learning for companies ranging from early-stage tech startups to some of the largest financial firms. He holds BS and MS degrees in computer science from Stanford.
Aaron Hill (MTS) is a back-end engineer with deep expertise in Rust. He became one of the maintainers of the Rust compiler… while still in college. Later, he worked on back-end infrastructure at AWS and Svix. He’s also an active contributor to many notable open-source Rust projects (e.g. Ruffle).
Andrew Jesson (MTS) is an ML researcher with deep expertise in Bayesian ML, causal inference, RL, and LLMs. He recently completed a postdoc at Columbia and previously received a PhD from Oxford, during which he interned at Meta. He has 3.3k+ citations and several first-author papers at NeurIPS and other top ML venues.
Alan Mishler (incoming MTS) is an ML researcher with a background in causal inference, sequential decision making, uncertainty quantification, and algorithmic fairness (1.2k+ citations). Previously, he was an AI Research Lead at JPMorgan AI Research and received a PhD in Statistics from CMU, during which he interned at Google and Box.
Shuyang Li (incoming MTS) previously was a staff software engineer at Google focused on next-generation search infrastructure, LLM-based search, and many other specialized search products (local, travel, shopping, maps, enterprise, etc.). Before that, he worked on ML/analytics products at Palantir and graduated summa cum laude from Notre Dame.
_____ You?
What We Offer
Competitive compensation — We believe that great talent deserves great compensation (salary, equity, benefits), even at an early-stage startup.
Open-source contributions — The vast majority of your work will be open-source and public.
Learning and growth opportunities — You’ll join with a background in design engineering but will have the opportunity (& be encouraged) to expand your skill set way beyond that (curious about ML?).
Small, technical, in-person team — You’ll work alongside a 100% technical team and help shape our vision, culture, and engineering practices.
Best-in-class investors — We’re lucky to be backed by leading funds like FirstMark (backed ClickHouse), Bessemer (backed Anthropic), Bedrock (backed OpenAI), and many angels. We have years of runway and a long-term mindset.
We’re Looking For
Strong design background — You’ve tackled hard design problems. You’re comfortable driving large projects from inception to deployment, from Figma to React.
Not afraid to code — You're excited to design in Figma, create prototypes, and ship PRs touching the UI.
Passionate about your craft & design — You're excited about the idea of re-thinking developer tooling from first principles to build interfaces and workflows that don't just work but also delight.
Hungry for personal growth — There are no speed limits at TensorZero. You’re excited about learning and contributing across the stack.
In-person in NYC — We work in person five days a week in NYC. We work hard and obsess over the craft, but we maintain and encourage a healthy lifestyle with a long-term mindset.
You can find us on GitHub: https://github.com/tensorzero/tensorzero