DevOps Engineering Lead - ML Infrastructure

Overview

Schedule: Full-time
Career level: Senior-level
Remote: On-site
Benefits: Career Development

Job Description

About us

While others focus on scaling data-hungry neural networks, we’re building AI that understands the structures of thought, not just patterns in data.

Symbolica is an AI research lab pioneering the application of category and type theory to enable logical reasoning in machines. We’re a well-resourced, nimble team of experts on a mission to bridge the gap between theoretical mathematics and cutting-edge technologies, creating symbolic reasoning models that think like humans – precise, logical, and interpretable.

Our approach combines rigorous research with fast-paced, results-driven execution. We’re reimagining the very foundations of intelligence while simultaneously developing product-focused machine learning models in a tight feedback loop, where research fuels application.

Founded in 2022, we’ve raised over $30M from leading Silicon Valley investors, including Khosla Ventures, General Catalyst, and Abstract Ventures, to push the boundaries of applying formal mathematics and logic to machine learning.

Our vision is to create AI systems that transform industries, empowering machines to solve humanity’s most complex challenges with precision and insight. Join us to redefine the future of AI by turning groundbreaking ideas into reality.

About the Role

As a DevOps Engineering Lead working closely with our Head of ML Engineering, you will lead the design, build-out, and optimization of the infrastructure and tools that take our research and development efforts from the lab to a highly reliable, performant, and secure production software stack. You'll help accelerate the journey from research prototype to production- and enterprise-ready platforms, with security, availability, and reliability in mind.

Your work will be at the intersection of research and engineering, ensuring our R&D team has the robust platform they need to push the boundaries of AI, working with our GPU vendors, cloud providers, and on-prem servers.

📍 This is an onsite role that is based in our SF office (345 California St.)

Key Responsibilities

  • Improve the reliability and performance of our Lambda cluster and model training pipeline.
  • Assist in managing multiple Kubernetes environments across cloud providers.
  • Maintain and build the internal observability platform across all environments, covering everything from GPUs to AI applications and distributed backend systems.
  • Take ownership of our model training and deployment systems, bringing them to a more scalable, production-ready state.
  • Help build comprehensive CI tests for GitOps repositories and promotion systems.
  • Build and maintain separate environments for research and client-facing products according to best practices.

About You

  • 5+ years of experience in DevOps or infrastructure roles, with at least 2 years in machine learning infrastructure or MLOps. Experience building, maintaining, or managing ML infrastructure using DevOps practices is a plus.
  • Proficient in cloud-native architectures, with the judgment to make the right tradeoffs where necessary.
  • Experienced with Linux, containers, GPU management, Nix, and Kubernetes, with an interest in making sure the infrastructure behind our models is secure by design.
  • Exceptional problem-solving skills, with the ability to nimbly handle edge cases with minimal disruption.
  • Solid software engineering skills in Rust, Golang, or Python.

What We Offer

  • Competitive salary and early-stage equity package.
  • A high-trust, execution-first culture with minimal bureaucracy.
  • Direct ownership of meaningful projects with real business impact.
  • A rare opportunity to sit at the interface between deep research and real-world productization.

Read more about Symbolica:

  • https://fortune.com/2024/04/09/vinod-khosla-former-tesla-autopilot-engineer-ai-models/
  • https://venturebeat.com/ai/move-over-deep-learning-symbolicas-structured-approach-could-transform-ai/

Symbolica is an equal opportunities employer. We celebrate diversity and are committed to creating an inclusive environment for all employees, regardless of race, gender, age, religion, disability, or sexual orientation.


FAQs About DevOps Engineering Lead - ML Infrastructure Jobs at Symbolica AI

What is the work location for this position at Symbolica AI?
According to the details provided by the employer, this role is based in San Francisco, California. Some roles may also span multiple work locations, depending on requirements.
What pay range can candidates expect for this role at Symbolica AI?
The employer has not shared pay details for this role.
What employment type applies to this position at Symbolica AI?
Symbolica AI lists this role as a Full-time position.
What experience level is required for this role at Symbolica AI?
Symbolica AI is looking for candidates with senior-level experience.
What is the process to apply for this position at Symbolica AI?
You can apply for this role at Symbolica AI either through Sonara's automated application system, which helps you submit applications 10X faster with minimal effort, or by applying manually using the direct link on the job page.