
Auto-apply to these DevOps jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

Zuma – San Francisco Bay Area, California
About Zuma
Zuma is pioneering the future of agentic AI, and our focus is to transform the rental market experience for consumers and property managers alike. Our innovative platform is engineered from the ground up to boost operational efficiency and enhance support capabilities for property management businesses across the US and Canada, a ~$200B market. Off the back of our Series A in early 2024, Zuma is scaling rapidly. Achieving our vision requires a team of passionate, innovative individuals eager to leverage technology to redefine customer-business interactions. We're on the hunt for exceptional talent ready to join our mission and contribute to building a groundbreaking technology that reshapes how businesses engage with customers. Zuma has raised over $17M in funding to date and has support from world-renowned investors, including Andreessen Horowitz (a16z), Y Combinator, King River, Range Ventures, and distinguished angel investors like YC's former COO, Qasar Younis.

As a Staff Engineer, you will help define how humans collaborate with intelligent systems in one of the largest and most underserved industries in the world: property management. You'll shape the technical foundation of a platform that is not just supporting human workflows but executing them autonomously through AI agents. This is a rare opportunity to influence how an entire industry evolves, building tools that transform repetitive operational tasks into seamless, intelligent experiences. Your work will directly contribute to how trust is built between humans and machines, how operations scale without added headcount, and how residents and staff experience a new, AI-powered standard of service. We're not just building software; we're designing AI that people want to work with: delightful, trustworthy, and deeply effective. Join us to help lead the AI revolution in multifamily, drive meaningful real-world impact, and be part of reimagining what work can feel like when done side-by-side with intelligent agents.

You will be a cornerstone of our engineering organization, reporting to the VPE. This is a pivotal role where you'll lead critical system rewrites, architect scalable foundations for our AI platform, and establish the technical standards that will shape our engineering culture for years to come. You'll work at the intersection of cutting-edge LLM technology and practical business applications, creating sophisticated systems that power our AI leasing agent while building self-serve experiences that enable rapid customer onboarding. As our first US-based engineer, you'll bridge the gap between our product vision and technical implementation. This role offers a rare opportunity to directly influence how we architect the next generation of our platform. You'll tackle projects like rebuilding our onboarding/configuration system to be self-serve, creating robust analytics infrastructure to measure AI performance, and reimagining our integration framework to connect seamlessly with customer systems. Your work will significantly reduce manual engineering overhead while enabling rapid scaling of our customer base. We're looking for a Staff Engineer to help us bring that future to life.
Why This Could Be Your Dream Role
- You'll work directly with cutting-edge LLM technology in a real-world application
- You want to work at a company where customers feel your impact every day
- You'll architect AI-powered systems that are transforming the real estate industry
- You'll have autonomy to design and implement innovative technical solutions
- Your work will directly impact thousands of apartment communities and millions of renters
- You'll receive significant equity in a venture-backed company with strong traction
- As we scale, your role and influence will grow with the company

Why You Might Want to Think Twice
- This is a demanding role that will often require extended hours and deep commitment
- As a founding team member, you'll need to wear multiple hats and step outside your comfort zone
- You'll need to make thoughtful tradeoffs between innovation and immediate needs
- You'll interact directly with customers to understand their needs and occasionally travel to their offices
- We're a startup - priorities can shift rapidly as we respond to market opportunities and customer needs
- If you're not comfortable getting your hands dirty with legacy code or speaking directly with customers, this isn't the job for you

Responsibilities
- Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation
- Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands
- Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products
- Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability
- Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform
- Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions
- Mentor engineers and elevate the team's capabilities across infrastructure, scalability, and AI product development

Your Experience Looks Like
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
- 5+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability
- Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services
- Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)
- Hands-on experience with database design, performance tuning, and scaling high-throughput data systems
- Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices
- Strong communication skills and ability to work effectively in a distributed, fast-paced environment
- Comfortable operating in early-stage, high-ownership environments with evolving requirements
- Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure
- Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows

Guiding Principles
Customer-First Outcomes: Every commit should trace back to resident or operator value. Whether it's a new feature, infra investment, or AI capability, if it doesn't solve a real problem, it doesn't ship.
Bias for Simplicity: We favor composable primitives over clever abstractions.
Open standards, clean APIs, and clear contracts win over custom complexity, even if the custom version is cooler.
Quality Is a Gate, Not an After-Thought: Quality is built in from day one. Our definition of done includes test coverage, performance checks, basic observability, and internal docs. Shipping fast doesn't mean skipping craftsmanship.
Data-Driven Choices: We use data to guide, not paralyze, our decision-making. We track leading indicators (cycle time, defect rate, NPS) and lagging signals (retention, revenue impact). We keep instrumentation lightweight but meaningful: signal over spreadsheets.
Transparency & Written Culture: Good ideas don't expire in Zoom. We operate in public inside the company: TDDs, PR reviews, and Linear tickets tell the story. This keeps us async-friendly, auditable, and aligned across time zones and functions.

Other Benefits
Great health insurance, dental, and vision. Gym and workspace stipends. Computer and workspace enhancements. Unlimited PTO. Company off-sites with the team. Opportunity to play a critical role in building the foundations of the company and Engineering culture.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
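To make the observability and instrumentation language above concrete, here is a minimal Python sketch using the prometheus_client library; the metric names, port, and simulated handler are illustrative assumptions, not part of Zuma's stack.

```python
# Minimal service instrumentation sketch (hypothetical names; illustration only).
# Exposes request counts and latencies that Prometheus can scrape on :8000/metrics.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("leasing_requests_total", "Requests handled", ["outcome"])
LATENCY = Histogram("leasing_request_seconds", "Request handling time in seconds")

def handle_request() -> None:
    """Stand-in for real work; records outcome and duration."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.05))      # simulate work
        outcome = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(outcome=outcome).inc()

if __name__ == "__main__":
    start_http_server(8000)   # serves /metrics for Prometheus to scrape
    while True:
        handle_request()
```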

Posted 30+ days ago

Booz Allen Hamilton – USA, New York

$61,900 - $141,000 / year

DevOps Platform Engineer

The Opportunity: Today's dynamic technology landscape demands constant and rapid innovation. To facilitate this transformation, we must ensure continuous integration and application development. That's why we need you, an experienced DevOps engineer who's eager to design, test, and program critical applications for our clients who need them most. As a software factory DevOps Platform Engineer on our team, you'll support our software development from requirements to testing in production. You'll incorporate automation and cloud resources to minimize repetitive tasks and free up the team's developers to do what they do best—innovate. You'll work together with fellow professionals to implement continuous integration and delivery to limit manual testing and troubleshooting. This is an opportunity to broaden your experience in software engineering while helping to develop software that will completely transform workflows and make a real impact. Here, we invest in technology—and we'll invest in you. With access to continuing education resources, tuition assistance opportunities, and tech development programs, you'll keep your skills sharp as you work at the leading edge of tech. Work with us as we build and test tools to transform the future. Join us. The world can't wait.

You Have:
- 4+ years of experience with maintaining and deploying production-grade containerized applications through DevOps practices
- Experience with IaC, including Terraform or Ansible
- Experience with Cloud, including AWS or Azure
- Experience with CI/CD and developer workflow automation, including GitHub Actions, GitLab CI, GitBash, or AWS CodeStar and CodePipeline
- Experience with Containerization, including Docker
- Experience with Cloud and Network Security architecture, including least privilege in IAM, secrets management, Role-Based Access Control (RBAC), or Boundary Protection
- Experience in troubleshooting platform application deployments and configurations
- Secret clearance
- Bachelor's degree in Computer Science, Engineering, or Mathematics

Nice If You Have:
- Experience with concepts of Service Meshes such as Istio or AWS App Mesh
- Experience with Prometheus, Grafana, or Keycloak
- Experience with Cloud Native servers, Kubernetes platforms, and container registries
- Experience with hardened AMIs and container images, including DoD STIGs or CIS Benchmarks
- Experience with APM, including Datadog, New Relic, or Splunk
- Experience in Rancher services, including RKE2
- Experience in Identity Authentication and Authorization, including Single-Sign-On, SAML, or OpenID
- Experience in Active Directory and GPO configuration and management
- Secret clearance

Clearance: Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information; Secret clearance is required.

Compensation
At Booz Allen, we celebrate your contributions, provide you with opportunities and choices, and support your total well-being. Our offerings include health, life, disability, financial, and retirement benefits, as well as paid leave, professional development, tuition assistance, work-life programs, and dependent care. Our recognition awards program acknowledges employees for exceptional performance and superior demonstration of our values. Full-time and part-time employees working at least 20 hours a week on a regular basis are eligible to participate in Booz Allen's benefit programs.
Individuals that do not meet the threshold are only eligible for select offerings, not inclusive of health benefits. We encourage you to learn more about our total benefits by visiting the Resource page on our Careers site and reviewing Our Employee Benefits page. Salary at Booz Allen is determined by various factors, including but not limited to location, the individual’s particular combination of education, knowledge, skills, competencies, and experience, as well as contract-specific affordability and organizational requirements. The projected compensation range for this position is $61,900.00 to $141,000.00 (annualized USD). The estimate displayed represents the typical salary range for this position and is just one component of Booz Allen’s total compensation package for employees. This posting will close within 90 days from the Posting Date. Identity Statement As part of the application process, you are expected to be on camera during interviews and assessments. We reserve the right to take your picture to verify your identity and prevent fraud. Work Model Our people-first culture prioritizes the benefits of flexibility and collaboration, whether that happens in person or remotely. If this position is listed as remote or hybrid, you’ll periodically work from a Booz Allen or client site facility. If this position is listed as onsite, you’ll work with colleagues and clients in person, as needed for the specific role. Commitment to Non-Discrimination All qualified applicants will receive consideration for employment without regard to disability, status as a protected veteran or any other status protected by applicable federal, state, local, or international law.
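To illustrate the "least privilege in IAM" requirement in this posting, the following is a small, hypothetical boto3 sketch that flags IAM users with the broad AdministratorAccess managed policy attached; it assumes AWS credentials are already configured and is not Booz Allen tooling.

```python
# Illustrative least-privilege audit (assumes configured AWS credentials).
# Flags IAM users that have the AdministratorAccess managed policy attached.
import boto3

iam = boto3.client("iam")
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

def users_with_admin_access() -> list[str]:
    flagged = []
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            attached = iam.list_attached_user_policies(UserName=user["UserName"])
            arns = {p["PolicyArn"] for p in attached["AttachedPolicies"]}
            if ADMIN_POLICY_ARN in arns:
                flagged.append(user["UserName"])
    return flagged

if __name__ == "__main__":
    for name in users_with_admin_access():
        print(f"review: {name} has AdministratorAccess attached")
```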

Posted 5 days ago

Avalore – Annapolis Junction, Maryland
Description
Supports a team of developers implementing multiple workflow products in a customer portfolio. Collaborates with system, software, and UI/UX engineers to design, develop, deploy, operate, and manage workflows hosted on a large-scale, enterprise application built on the Pega Platform. Collaborates with system, DevOps, and UI/UX engineers to ensure workflows are security compliant, accessibility compliant, and meet minimum performance requirements.

Requirements
- Bachelor's degree in Computer Science or related technical degree from an accredited college or university. Four (4) years of additional experience may be substituted for a Bachelor's degree.
- At least nine (9) years of software development experience
- MUST have experience with Pega development
- Clearance: Active TS/SCI with an appropriate current polygraph is required to be considered for this role; ability to receive privileged access rights.

Benefits
Eligibility requirements apply.
- Employer-Paid Health Care Plan (Medical, Dental & Vision)
- Retirement Plan (401k, IRA) with a generous matching program
- Life Insurance (Basic, Voluntary & AD&D)
- Paid Time Off (Vacation, Sick & Public Holidays)
- Short Term & Long Term Disability
- Training & Development
- Employee Assistance Program

Posted 30+ days ago

Micron – Boise, Idaho
Our vision is to transform how the world uses information to enrich life for all . Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. Micron’s test engineering organization designs, develops, and delivers tester hardware and software to enable Micron’s industry leading memory and storage product portfolio! Software design and development within test engineering includes embedded software design, applications software, site reliability support, and test program development. As a DevOps architect you will be responsible for designing, implementing, and leading all aspects of a DevOps framework within test engineering. By embracing industry-standard, modern toolchains, processes, and software development technology we can empower a community of developers and enhance focus on test program content and outcomes for Micron’s technology and products. What’s Encouraged Daily: Advise software development strategy and roadmap for test engineering Ownership of software project approval process Drive organizational change initiatives around software toolchains, processes, and DevOps framework Approximately 50% of time will be on roadmap definition, execution, and project planning; 50% on individual contributor activities including technical pathfinding Design/Develop/Administer solutions with Docker / Kubernetes. Linux Administration using automation tools such as Ansible. Develop & Debug in Python, Go, C++ as well as other languages. Identify reliability and resiliency issues in sophisticated systems and work with developer teams to resolve them. Develop relationships with other engineering teams to define developer and system infrastructure roadmaps. Continuously look for opportunities for both personal, team, and platform improvement. How To Qualify: 10 years of experience in software development and/or software product management Proven track record of driving large organizational objectives Familiarity with debuggers, compilers, source control, and networking architectures. Familiarity with Cloud Services (Azure, AWS, GCP ) Familiarity with Docker containers and Kubernetes clusters Familiarity with high availability concepts and architectures Passion for automation, toil reduction, and engineer enablement Highly self-motivated and directed Experience with Jenkins or another CI/CD tool Experience with Atlassian suite of tools (Jira, Confluence, Bitbucket) strongly preferred Experience with SIG tester platforms strongly preferred Prior people leadership strongly preferred BS or MS in Computer Science, Information Technology, Computer Engineering or equivalent experience required As a world leader in the semiconductor industry, Micron is dedicated to your personal wellbeing and professional growth. Micron benefits are designed to help you stay well, provide peace of mind and help you prepare for the future. We offer a choice of medical, dental and vision plans in all locations enabling team members to select the plans that best meet their family healthcare needs and budget. Micron also provides benefit programs that help protect your income if you are unable to work due to illness or injury, and paid family leave. Additionally, Micron benefits include a robust paid time-off program and paid holidays. 
For additional information regarding the Benefit programs available, please see the Benefits Guide posted on micron.com/careers/benefits . Micron is proud to be an equal opportunity workplace and is an affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, national origin, citizenship status, disability, protected veteran status, gender identity or any other factor protected by applicable federal, state, or local laws. To learn about your right to work click here. To learn more about Micron, please visit micron.com/careers For US Sites Only: To request assistance with the application process and/or for reasonable accommodations, please contact Micron’s People Organization at hrsupport_na@micron.com or 1-800-336-8918 (select option #3) Micron Prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron. AI alert : Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website in the About Micron Technology, Inc.

Posted 1 day ago

Mach Industries – Huntington Beach, California
About Mach Industries
Founded in 2022, Mach Industries is a rapidly growing defense technology company focused on developing next-generation autonomous defense platforms. At the core of our mission is the commitment to delivering scalable, decentralized defense systems that enhance the strategic capabilities of the United States and its allies. With a workforce of approximately 180 employees, we operate with startup agility and ambition. Our vision is to redefine the future of warfare through cutting-edge manufacturing, innovation at speed, and unwavering focus on national security. We are dedicated to solving the next generation of warfare with lethal systems that deter kinetic conflict and protect global security.

The Role
We're hiring our first DevOps Engineer to build and own the infrastructure layer that powers everything we do — from developer velocity to production uptime. You'll design, implement, and maintain the systems and tooling that keep our software lifecycle secure, scalable, and fast. This is a foundational hire: you'll be given wide technical latitude and trusted to help shape our DevOps culture, tooling, and long-term approach to reliability and automation. We're looking for someone who loves enabling engineering teams through thoughtful infrastructure design, has strong opinions about CI/CD and observability best practices, and thrives in a high-trust, fast-paced environment.

Key Responsibilities
- Build and maintain GitHub CI/CD pipelines to accelerate developer velocity and minimize deployment risk.
- Architect and manage cloud infrastructure (AWS) for high availability, scalability, and security.
- Monitor and improve system observability: logging, metrics, alerting, on-call response.
- Collaborate with software and hardware teams to support test infrastructure, simulation environments, and production deployments.
- Lead the implementation of best practices in infrastructure security and cost optimization.
- Establish incident response protocols and support root cause analysis.
- Help shape DevOps processes, tools, and team culture as we scale.

Required Qualifications
- 3+ years of experience in a DevOps, Infrastructure, or SRE role.
- Proficiency with cloud platforms like AWS.
- Experience building and maintaining CI/CD pipelines using tools like GitHub Actions, GitLab CI, Jenkins, etc.
- Strong background in scripting (e.g. Python, Bash).
- Solid understanding of containerization (Docker) and orchestration (e.g. Kubernetes or ECS).
- Familiarity with system observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Excellent debugging and incident management skills.
- Comfortable owning large initiatives end-to-end with minimal oversight.

Preferred Qualifications
- Experience supporting production systems in a defense, robotics, or hardware-adjacent environment.
- Knowledge of airgapped or secure deployment environments.
- Familiarity with compliance standards (e.g. FedRAMP, NIST 800-171, or CMMC).
- Exposure to embedded systems build pipelines or cross-compilation toolchains.
- Passion for automation, infrastructure design, and enabling engineering teams.
- Experience in infrastructure-as-code (e.g. Terraform, CloudFormation).

Disclosures
This position may require access to information protected under U.S. export control laws and regulations, including the Export Administration Regulations (EAR) and the International Traffic in Arms Regulations (ITAR). Please note that any offer for employment may be conditioned on authorization to receive software or technology controlled under these U.S.
export control laws and regulations without sponsorship for an export license. Mach participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S. The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. Actual salary offers may vary based on (but not limited to) work experience, education and training, critical skills, and business considerations. Highly competitive equity grants are included in most offers and are considered part of Mach’s total compensation package. Mach offers benefits such as health insurance, retirement plans, and opportunities for professional development. Mach is an equal opportunity employer committed to creating a diverse and inclusive workplace. All qualified applicants will be treated with respect and receive equal consideration for employment without regard to race, color, creed, religion, sex, gender identity, sexual orientation, national origin, disability, uniform service, Veteran status, age, or any other protected characteristic per federal, state, or local law, including those with a criminal history, in a manner consistent with the requirements of applicable state and local laws. If you’d like to defend the American way of life, please reach out!
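As a purely illustrative example of the CI/CD gating this posting describes, the sketch below shows a small Python smoke test a pipeline step might run after a deploy; the endpoint URL and retry budget are made-up placeholders, not Mach Industries tooling.

```python
# Hypothetical post-deploy smoke test a CI/CD job could run (illustration only).
# Exits non-zero if the health endpoint is not healthy within a time budget,
# so the pipeline can block promotion or trigger a rollback.
import sys
import time
import urllib.error
import urllib.request

HEALTH_URL = "https://staging.example.internal/healthz"   # placeholder URL
ATTEMPTS = 10
DELAY_SECONDS = 6

def healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def main() -> int:
    for attempt in range(1, ATTEMPTS + 1):
        if healthy(HEALTH_URL):
            print(f"healthy after {attempt} attempt(s)")
            return 0
        time.sleep(DELAY_SECONDS)
    print("service did not become healthy; failing the pipeline step")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```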

Posted 30+ days ago

Gabb Wireless – Lehi, Utah
About the Role: We are seeking a highly skilled AWS DevOps Engineer to design, build, and secure cloud infrastructure that powers Gabb's platform and device ecosystem. This role combines hands-on work (≈70%) with architectural guidance and cross-team collaboration (≈30%). You'll ensure our environments are scalable, observable, cost-efficient, and secure.

Key Responsibilities – What You'll Do
- Cloud Infrastructure Development – Build and manage AWS infrastructure using Terraform/CDK. Automate provisioning, scaling, and monitoring for dev, staging, and production.
- Security & Compliance – Implement and automate security controls, vulnerability scanning, secrets management, and policy-as-code.
- Identity & Access Management – Manage IAM roles, policies, and encryption with KMS. Integrate with AWS Secrets Manager or Vault.
- Networking & Security Posture – Architect secure VPCs, subnets, and routing; implement WAF, GuardDuty, and Shield.
- Automation & Tooling – Write reusable automation in Python, Go, or Bash. Enforce IaC linting and drift detection.
- Incident Management – Participate in on-call rotations, resolve production incidents, and lead RCA improvements.

Required Qualifications – What You'll Need
- 6+ years of experience in DevOps, Cloud, or Security Engineering (AWS-focused)
- Expert-level understanding of AWS services (EC2, ECS, Lambda, RDS, S3, CloudFront, IAM, VPC, KMS, etc.)
- Strong IaC experience: Terraform, CDK, or Pulumi
- Proven experience with CI/CD pipelines (GitHub Actions, GitLab CI, CodeBuild, or Jenkins)
- Skilled with Docker, ECS, or Kubernetes
- Solid grasp of networking & load balancing (ALB/NLB, Route53, VPC Peering)
- Experience with security & compliance tooling (OPA, AWS Config, Security Hub)
- Strong scripting skills in Python, Go, or Bash
- Experience with monitoring: Datadog, Prometheus, or CloudWatch

Preferred Skills
- Experience with SOC2, NIST, or CIS compliance automation
- Familiarity with Kubernetes and Helm
- Prior experience in a security-first consumer tech or IoT company

Mindset & Soft Skills
- Strong ownership and accountability mindset
- Excellent communication and documentation habits
- Thrives in a high-autonomy, fast-paced environment

Work Expectations
• Remote position; collaborate during US Mountain Time hours
• Long-term engagement with potential for growth into architecture leadership

Why Join Gabb
• Shape cloud and security architecture for a mission-driven company impacting millions of families
• Work directly with senior engineers and platform leadership
• Be part of a team that values technical excellence, autonomy, and security-first design

About Gabb
Gabb is the leader in safe technology for kids and teens. We build phones, watches, and software that keep families connected while reducing online risks. Our mission is to provide peace of mind to parents and digital independence to kids. Join us in shaping the future of safe tech and building secure, scalable systems that make a difference.
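Because the posting calls out IaC linting and drift detection, here is a minimal, hypothetical Python wrapper around terraform plan -detailed-exitcode of the sort such automation often uses; the working directory is a placeholder and this is a sketch, not Gabb's tooling.

```python
# Illustrative drift check: runs `terraform plan -detailed-exitcode` and interprets
# the exit code (0 = no changes, 2 = drift/changes detected, anything else = error).
# Assumes terraform is installed and the working directory is already initialized.
import subprocess
import sys

WORKDIR = "infra/prod"   # placeholder path

def check_drift(workdir: str) -> int:
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print("no drift detected")
    elif result.returncode == 2:
        print("drift detected; plan summary follows:\n" + result.stdout)
    else:
        print("terraform plan failed:\n" + result.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(check_drift(WORKDIR))
```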

Posted 3 days ago

State Affairs – Miami, Washington

$120,000 - $180,000 / year

State Affairs is the nation's leading news and policy intelligence platform focused on state governments. We combine nonpartisan coverage of Statehouses across the country alongside state government data and AI-native tools into a singular platform. We inform and empower decision makers, policy professionals, and citizens through our award-winning journalism and data – delivering profound insights to help our customers decode and act on state politics and policy. We're building a category-defining business that will reshape America as we strengthen visibility into what's happening and why at the state level.

As the Software Engineer, DevOps, you will:
- Lay the foundation for the platform on which State Affairs will run and operate
- Design and build distributed services (Node and TypeScript, Prisma, Postgres, Mongo) that ingest, enrich, and serve millions of documents
- Build large-scale distributed systems to handle petabyte content scale
- Raise expectations for code quality, reliability, and product velocity. Collaboratively, you will challenge yourself and peers to develop technically and professionally.
- Lead by example on code quality via pair-reviewing, writing generative tests, and mentoring teammates in modern TypeScript patterns
- Ship secure and compliant code by implementing security concepts to develop software dealing with sensitive data
- Operate by utilizing AI tools like Cursor, Claude-Code, Copilot/Codex, or whatever comes next in your daily flow to increase iteration loops

Essential Qualifications for this position include:
- Bachelor's degree in computer science, engineering, or related field
- 5+ years of professional work experience as a DevOps Engineer
- Professional work experience building production back-ends in Node/TypeScript or a similar typed language
- Professional work experience shipping secure, audited systems such as SOC-2, FedRAMP, or ISO 27001
- Knowledge of at least one SQL store, such as Postgres, and a schema-first mindset
- Ability to automate operational tasks such as deployment, testing, and monitoring (Ansible, Chef, Puppet, and K8s)

Preferred Qualifications for this position include:
- Ability to build data pipelines (Kafka, Kinesis, or equivalent) and event-driven architectures
- Ability to program using coding workflows by leveraging AI tools such as Cursor, Windsurf, and/or Claude Code
- Professional work experience with document AI, search indexes (Elastic/Lucene), or knowledge graphs (e.g., Neo4j)
- Knowledge of Terraform
- Prior professional work experience in a start-up organization

This is an onsite work opportunity and our teams operate from the Washington, DC office (located at L and 15th St. NW). State Affairs offers a competitive salary and a comprehensive benefits package to employees. The annual salary range for this role as it is posted is $120,000 - $180,000 for candidates working from the State Affairs office in Washington, DC. The final job level and annual salary will be determined based on the education, qualification, knowledge, skills, ability, and experience of the final candidate(s), and calibrated against relevant market data and internal team equity. Benefits listed in this posting may vary depending on the nature of your employment with State Affairs. Candidates must be authorized to work in the United States without the need for current or future company sponsorship. State Affairs is an equal opportunity employer and makes employment decisions on the basis of merit and business needs.
State Affairs does not discriminate against applicants on the basis of race, color, religion, sex, sexual orientation, gender, gender identity, national origin, veteran status, disability, or any other protected characteristic in accordance with federal, state, and local law. State Affairs is committed to providing reasonable accommodations for qualified individuals with disabilities as they go through our job application and interview process. If you need assistance or an accommodation due to a disability, you may contact us at jobs@stateaffairs.com. By submitting your application, you affirm the content contained therein is true and accurate in all respects. Please note that prior to employment, State Affairs will obtain background checks for employment purposes that may include, where permitted by law, the following: identity verification, prior employment verification, personal and professional references, educational verification, and criminal history. For certain roles, further background checks covering additional information and activities may be initiated. By clicking "Submit Application" you are consenting to the use and retention of the information you have provided as set forth in the State Affairs Privacy Policy.

Posted 30+ days ago

Superstate – New York City, New York
Tired of the inefficiencies and complexities of traditional finance? We are too. At Superstate, we create investment products that benefit from the speed, programmability, and compliance advantages of blockchain tokenization. As a Staff DevOps Engineer at Superstate, you'll be responsible for building and maintaining highly reliable, scalable infrastructure that powers our financial products. You'll architect robust systems that ensure our platforms deliver exceptional performance, availability, and security at scale. You'll be wearing many hats across general DevOps, security, observability, and reliability and help lead these initiatives.

What you'll do
- Design and implement scalable, fault-tolerant infrastructure for our web applications and blockchain systems
- Build and maintain CI/CD pipelines, deployment automation, and infrastructure as code
- Develop comprehensive monitoring, alerting, and observability systems to ensure system reliability
- Lead incident response efforts and conduct post-incident reviews to improve system resilience
- Implement security best practices across all infrastructure and deployment processes
- Collaborate with engineering teams to optimize application performance and reliability
- Establish SLIs, SLOs, and error budgets to measure and improve service reliability
- Automate operational tasks and reduce manual toil through tooling and process improvements
- Manage capacity planning and cost optimization for cloud resources

What you'll bring to the team
- Bachelor's degree in Computer Science, Engineering, or related technical field
- 8+ years of professional software engineering or infrastructure experience
- Extensive experience with AWS cloud services and infrastructure management
- Extensive experience with infrastructure as code (Terraform, Pulumi)
- Strong background in systems engineering, networking, and distributed systems
- Expertise in containerization and orchestration (ECS, EKS)
- Proficiency in languages (Java, Go, Rust, or similar) for automation and tooling
- Deep understanding of security principles and secure infrastructure practices
- Experience with high-availability systems and disaster recovery planning
- Exceptional problem-solving skills with a reliability- and security-focused mindset

Bonus points if you have experience with these technologies
- Microservices architecture and service decomposition strategies
- Observability tools (Prometheus, Grafana, Sentry, etc.)
- Database administration and optimization (PostgreSQL, Redis, AWS RDS)
- AWS security services (GuardDuty, Security Hub, WAF, Shield, Config)
- Cloudflare optimization and security configurations
- GitHub Actions workflows and deployment strategies
- Compliance automation for financial services (SOC2, ISO, PCI DSS)
- Chaos engineering/resilience testing and reliability best practices

Benefits & Perks
- Competitive compensation
- Equity ownership
- Flexible vacation policy
- Paid parental leave
- Full medical, dental, and vision insurance
- Professional development and security certification budget

We are an equal opportunity employer and value diversity at our company. We welcome qualified candidates of all races, creeds, genders, ages, veteran statuses, and sexualities to apply. Founded in 2023, Superstate is backed by leading investors including Distributed Global, CoinFund, Breyer Capital, Galaxy, ParaFi, 1kx, and Cumberland. We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses.
These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
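For context on the "SLIs, SLOs, and error budgets" responsibility above, a short, self-contained Python sketch of the arithmetic behind an availability error budget follows; the target and request counts are invented for illustration.

```python
# Error-budget arithmetic sketch (numbers are invented for illustration).
# With a 99.9% availability SLO, the error budget is the 0.1% of requests
# allowed to fail in the window; burn rate compares actual failures to that budget.

SLO_TARGET = 0.999          # 99.9% availability objective
TOTAL_REQUESTS = 4_200_000  # requests served in the window (example)
FAILED_REQUESTS = 2_730     # failed requests in the window (example)

error_budget = (1 - SLO_TARGET) * TOTAL_REQUESTS        # allowed failures (4,200)
availability = 1 - FAILED_REQUESTS / TOTAL_REQUESTS     # measured SLI
burn_rate = FAILED_REQUESTS / error_budget              # 1.0 = budget exactly spent

print(f"measured availability: {availability:.4%}")
print(f"error budget (allowed failures): {error_budget:,.0f}")
print(f"budget burned: {burn_rate:.0%}")
if burn_rate > 1:
    print("SLO violated for this window; consider freezing risky releases")
```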

Posted 30+ days ago

Ryan – Dallas, Texas

$96,000 - $126,060 / year

Why Ryan?
- Hybrid Work Options
- Award-Winning Culture
- Generous Personal Time Off (PTO) Benefits
- 14 Weeks of 100% Paid Leave for New Parents (Adoption Included)
- Monthly Gym Membership Reimbursement OR Gym Equipment Reimbursement
- Benefits Eligibility Effective Day One
- 401K with Employer Match
- Tuition Reimbursement After One Year of Service
- Fertility Assistance Program
- Four-Week Company-Paid Sabbatical Eligibility After Five Years of Service

As a member of the Ryan Application Development Team, this position will be a critical contributor to an ambitious strategic initiative with the goal of re-envisioning a broad suite of enterprise-level applications. Aiming to create simple and compelling developer and client experiences, the DevOps Engineer will be required to design, maintain, and manage infrastructure configuration with the goal of achieving a continuously deployable system while maintaining an exceptional level of service delivery for all our clients. We are looking for someone who is experienced in building scalable and efficient cloud infrastructure with built-in monitoring for automated system health checks. Candidates will be required to interface directly with technical leads, software architects, other business groups, and key stakeholders, demanding exceptional communication and interpersonal skills. The Ryan Application Development Team promotes an open-minded atmosphere of learning and growth and expects the same from candidates. We want to foster a positive and enthusiastic can-do attitude with our work. Candidates should have a sense of where things are going and have experience using best-of-breed tools, technologies, and practices. This role is a formative one for the future of application development within Ryan, LLC and will be best filled by candidates hungry to have a huge impact.

Duties and responsibilities, as they align to Ryan's Key Results

People:
- Creates a positive team member experience.
- Support and assist development teams as needed.

Client:
- Proactively provide updates to CI/CD pipelines and tooling around them.
- Implement and maintain monitoring and alerting systems.
- Build and maintain production systems ensuring high availability and scalability.

Value:
- Oversee routine maintenance procedures and perform diagnostic tests.
- Submit recommendations to application development teams for upgrades and enhancements.
- Contribute to efficiency improvements through process automation.
- Document and maintain system design and functions.
- Performs other duties as assigned.

Education and Experience: Bachelor's in Computer Science, Engineering, Mathematics, or equivalent related work experience. Three or more years of practical applications development experience in a DevOps or System Engineer capacity.

Computer Skills: To perform this job successfully, an individual must have strong knowledge of Azure, AWS, Azure DevOps, Windows Server, Microsoft SQL Server, SQL Server Availability Groups, scripting (Python, PowerShell), automation (Terraform, Ansible, CloudFormation, etc.), and containerization technologies such as Docker and Kubernetes.

Certificates and Licenses: Valid driver's license required.

Supervisory Responsibilities: This position has no supervisory responsibilities.

Work Environment: Standard indoor working environment. Occasional long periods of sitting while working at a computer. Position requires regular interaction with employees at all levels of the firm as well as interaction with external vendors and clients as necessary. Independent travel requirement: up to 10%.
Compensation: For certain California based roles, the base salary hiring range for this position is $96,000.00 - $126,060.00 For other California based roles, the base salary hiring range for this position is $88,000.00 - $115,610.00 For Colorado based roles, the base salary hiring range for this position is $84,000.00 - $110,330.00 For Illinois based roles, the base salary hiring range for this position is $88,000.00 - $115,610.00 For other Illinois based roles, the base salary hiring range for this position is $84,000.00 - $110,330.00 For New York based roles, the base salary hiring range for this position is $96,000.00 - $126,060.00 For other New York based roles, the base salary hiring range for this position is $80,000.00 - $105,050.00 For Washington based roles, the base salary hiring range for this position is $88,000.00 - $115,610.00 The Company makes offers based on many factors, including qualifications and experience. Certain roles may be eligible for incentive compensation. #DICE Equal Opportunity Employer: disability/veteran

Posted 30+ days ago

Cleerly – New York City, New York

$207,000 - $235,000 / year

About Cleerly
We're Cleerly – a healthcare company that's revolutionizing how heart disease is diagnosed, treated, and tracked. We were founded in 2017 by one of the world's leading cardiologists and are a growing team of world-class engineering, operations, medical affairs, marketing, and sales leaders. We raised $223M in Series C funding in 2022, which has enabled rapid growth and continued support of our mission. In December 2024 we received an additional $106M in a Series C extension funding. Most of our teams work remotely and have access to our offices in Denver, Colorado; New York, New York; Dallas, Texas; and Lisbon, Portugal, with some roles requiring you to be on-site in a location.

Cleerly has created a new standard of care for heart disease through value-based, AI-driven precision diagnostic solutions with the goal of helping prevent heart attacks. Our technology goes beyond traditional measures of heart disease by enabling comprehensive quantification and characterization of atherosclerosis, or plaque buildup, in each of the heart arteries. Cleerly's solutions are supported by more than a decade of performing some of the world's largest clinical trials to identify important findings beyond symptoms that increase a person's risk of heart attacks.

At Cleerly, we collaborate digitally and use a wide variety of systems. Our people use Google Workspace (Gmail, Drive, Docs, Sheets, Slides), Slack, Confluence/Jira, and Zoom Video; prior experience in these areas is a plus. Role- or department-specific technology needs may vary and will be listed as requirements in the job description. While we are mostly a remote company, travel is required for some team meetings and cross-functional projects, typically once per month or once per quarter; for some roles, like sales or external-facing roles, travel could be up to 90% of the time.

About the Opportunity
We're looking for a Staff Cloud DevOps Engineer to join our growing team and play a key role in advancing our next-generation, AI-powered diagnostic platform. In this role, you'll be a leader on our DevOps team responsible for designing and evolving the systems that support continuous integration, automated testing, secure infrastructure, and reliable software delivery. As a senior leader of the team, you'll lead initiatives that enhance build and release pipelines, develop internal tools and services, and ensure the scalability, performance, and security of our infrastructure. You'll also help define best practices, guide architectural decisions, and mentor more junior DevOps engineers. This is an ideal role for someone who thrives in fast-paced environments, enjoys solving complex problems, and has a passion for building and improving cloud-native systems that accelerate engineering velocity.

Responsibilities
- Lead automation of deployment, configuration, and infrastructure operations across operating systems, databases, networks, and hybrid cloud environments.
- Own and maintain the Terraform codebase, implementing infrastructure as code best practices and enforcing security and compliance via policy-as-code.
- Design, implement, and maintain CI/CD pipelines using GitHub Actions to ensure fast, reliable, and secure delivery of infrastructure and application changes.
- Manage Kubernetes environments (EKS), including cluster provisioning, workload orchestration, scaling, upgrades, and observability.
- Demonstrate strong expertise in AWS security, encryption, and backup practices, including compliance with frameworks such as SOC 2, HIPAA, and HITRUST.
- Manage monitoring and log analysis using tools like CloudWatch, CloudTrail, GuardDuty, Datadog, and Sentry.
- Collaborate with application teams to gather requirements and deliver secure, scalable migration paths using AWS services like CloudFront, ECS, EC2, EKS, ElastiCache, Aurora, DynamoDB, SQS, SNS, Step Functions, and Lambdas.
- Champion Agile and DevOps transformation by promoting best practices, continuous improvement, and team-level adoption strategies.
- Contribute to org-wide initiatives focused on improving test infrastructure, environment stability, and automation tooling.
- Proactively identify security gaps in infrastructure and tooling; implement controls and guardrails using IAM, AWS Config, and security scanning tools.
- Collaborate closely with engineering, security, and IT teams to ensure infrastructure aligns with product requirements and compliance standards.
- Design and implement highly available, fault-tolerant systems with strong observability and disaster recovery capabilities.
- Define and track SLAs/SLOs in partnership with product and engineering teams, leveraging observability tooling to monitor service health.
- Monitor infrastructure costs and lead optimization efforts across compute, storage, and networking resources.
- Participate in on-call rotations and lead post-incident reviews, driving long-term reliability improvements across systems.

Requirements
- Bachelor's degree in Computer Science, Software Engineering, or related field
- 12-15 years of experience as a DevOps Engineer in cloud environments (preferably AWS)
- 8+ years of experience in server administration (Linux)
- 8+ years of experience in scripting or development languages (Bash shell script, Python, Node.js)
- Excellent communication and collaboration skills
- Strong problem-solving and debugging capabilities
- Detail-oriented, with a focus on delivering high-quality, maintainable code
- Ability to work independently and as part of a distributed team

Highly Desired Requirements
- Amazon AWS Certified Solutions Architect certification
- Amazon AWS Certified Developer certification
- Amazon AWS Certified SysOps Administrator certification
- Amazon AWS DevOps certification

TTC*: $207,000 - $235,000
*Total Target Compensation (TTC): Total Cash Compensation (including base pay, variable pay, commission, bonuses, etc.). Each role at Cleerly has a defined salary range based on market data and company stage. We typically hire at the lower to mid-point of the range, with the top end reserved for internal growth and exceptional performance. Actual pay depends on factors like experience, technical depth, geographic location, and alignment with internal peers. #remote

Working at Cleerly takes HEART. Discover our Core Values:
H: Humility - be a servant leader
E: Excellence - deliver world-changing results
A: Accountability - do what you say; expect the same from others
R: Remarkable - inspire & innovate with impact
T: Teamwork - together we win

Don't meet 100 percent of the qualifications? Apply anyway and help us diversify our candidate pool and workforce. We value experience, whether gained formally or informally on the job or through other experiences. Job duties, activities and responsibilities are subject to change by our company. OUR COMPANY IS AN EQUAL OPPORTUNITY EMPLOYER.
We do not discriminate on the basis of race, color, national origin, ancestry, citizenship status, protected veteran status, religion, physical or mental disability, marital status, sex, sexual orientation, gender identity or expression, age, or any other basis protected by law, ordinance, or regulation. By submitting your application, you agree to receive SMS messages from Cleerly recruiters throughout the interview process. Message frequency may vary. Message and data rates may apply. You can STOP messaging by sending STOP and get more help by sending HELP. For more information see our Privacy Policy ( https://cleerlyhealth.com/privacy-policy) . All official emails will come from @cleerlyhealth.com email accounts. #Cleerly
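Relevant to the "monitor infrastructure costs" responsibility in this posting, below is a small, hypothetical boto3 sketch that pulls the last 30 days of AWS spend grouped by service via Cost Explorer; credentials and error handling are simplified, and this is an illustration rather than Cleerly tooling.

```python
# Illustrative cost report via AWS Cost Explorer (assumes configured credentials
# with ce:GetCostAndUsage permission). Not production tooling.
from datetime import date, timedelta

import boto3

def last_30_days_costs_by_service() -> list[tuple[str, float]]:
    ce = boto3.client("ce")
    end = date.today()
    start = end - timedelta(days=30)
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    rows = []
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            rows.append((service, amount))
    return sorted(rows, key=lambda r: r[1], reverse=True)

if __name__ == "__main__":
    for service, amount in last_30_days_costs_by_service()[:10]:
        print(f"{service:45s} ${amount:,.2f}")
```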

Posted 3 weeks ago

Mintlify – San Francisco, California
Why Mintlify? We're on a mission to empower builders.
- Massive reach: Our docs platform serves 100 million+ developers every year and powers documentation for 18,000+ companies, including Anthropic, Cursor, PayPal, Coinbase, X, and over 20% of the last YC batch.
- Small team, huge impact: We're only 35 people today, backed by $22 million in funding; each new hire shapes the company's trajectory.
- Culture of slope over y-intercept: We value learning velocity, grit, and unapologetically unique personalities. We grew in value faster than headcount, and we're looking to align the two quickly.

What you'll work on here
We're at an inflection point: serving 100M+ developers annually while operating on infrastructure managed by just one expert. As we rapidly scale, we need a second DevOps engineer to help us build reliable, scalable infrastructure we control.
- Monitoring and scaling our AWS infrastructure (EKS and ECS) to support our rapidly growing user base
- Migrating services from third-party vendors to our own AWS/EKS infrastructure to gain control and agency over our platform
- Building observability and alerting systems to ensure 99.9%+ uptime for our documentation platform
- Improving deployment pipelines and CI/CD processes to support our fast-moving engineering team
- Collaborating with backend engineers to architect and implement scalable, reliable infrastructure solutions
- Working alongside our current DevOps expert to establish best practices and expand our infrastructure capabilities

What you bring to the table
- You have strong experience with AWS services, particularly EKS (Kubernetes) and ECS
- You know how to build reliable, observable systems at scale
- You have experience with infrastructure as code (Terraform, CloudFormation, or similar)
- You're comfortable with monitoring and observability tools (Prometheus, Grafana, Datadog, or similar)
- You know how to go from 0-to-1 and aren't afraid to own complex migrations
- You have strong communication skills and can translate technical decisions to the broader team
- You thrive in a collaborative team setting

Why you should join our engineering team
Engineers at Mintlify appreciate a high degree of ownership, are passionate about reliability and performance, and come to work ready to contribute to a small-but-mighty team. You'll have plenty of heads-down builder time. We believe in the power of strong teams to drive change - and have created an environment where the best ideas win and we can acknowledge when we're wrong. We're all about finding the intersection between what excites you and business priorities. You'll jump into new territory and learn something new. You'll own projects and features. You'll ship.

Company Benefits:
- Competitive compensation and equity | Free Ubers
- 20 days paid time off every year | Health, dental, vision
- 401k or RRSP | Free lunch and dinners
- $420/mo. wellness stipend | Annual team offsite
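For the EKS migration and reliability work this posting describes, here is a minimal, hypothetical sketch using the official kubernetes Python client to verify that a Deployment has fully rolled out; the namespace and deployment name are placeholders, not Mintlify's configuration.

```python
# Illustrative rollout check with the official kubernetes Python client.
# Assumes a reachable cluster and a kubeconfig; names are placeholders.
import sys
import time

from kubernetes import client, config

NAMESPACE = "docs"            # placeholder
DEPLOYMENT = "docs-frontend"  # placeholder
TIMEOUT_SECONDS = 300

def rollout_complete(apps: client.AppsV1Api) -> bool:
    dep = apps.read_namespaced_deployment(DEPLOYMENT, NAMESPACE)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    updated = dep.status.updated_replicas or 0
    return desired > 0 and ready == desired and updated == desired

def main() -> int:
    config.load_kube_config()   # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    deadline = time.time() + TIMEOUT_SECONDS
    while time.time() < deadline:
        if rollout_complete(apps):
            print(f"{DEPLOYMENT} rolled out successfully")
            return 0
        time.sleep(10)
    print(f"{DEPLOYMENT} did not finish rolling out within {TIMEOUT_SECONDS}s")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```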

Posted 1 week ago

ProSync – Annapolis Junction, Maryland
Description
ProSync Technology Group, LLC (ProSync) is an award-winning, SDVOSB defense contracting company with a strong military heritage and a record of excellence in supporting the Department of Defense and the Intelligence Community. If you have prior military service or government contracting experience, are proud to serve and support our nation, and want to help support ProSync's mission to "Define and Redefine the State of Possible," please apply today!

Please be aware that although this position is listed as SWE III, it is not centered around software development. This role is entirely focused on DevOps tasks. We are only considering candidates with a strong interest in DevOps responsibilities.

The Software Engineer designs, develops, tests, deploys, documents, maintains, and enhances complex and diverse software systems based upon documented requirements. These systems might include, but are not limited to, processing-intensive analytics, novel algorithm development, manipulation of extremely large data sets, real-time systems, business management information systems, and systems which incorporate data repositories, data transport services, and application and systems development and monitoring. Works individually or as part of a team. Reviews and tests software components for adherence to the design requirements and documents test results. Resolves software problem reports. Utilizes software development and software design methodologies appropriate to the development environment. Provides specific input to the software components of system design, to include hardware/software trade-offs, software reuse, use of Open Source Software (OSS) and/or Commercial Off-The-Shelf (COTS) or Government Off-The-Shelf (GOTS) software in place of new development, and requirements analysis and synthesis from system level to individual software components. Experience developing in Unix. Ability to perform shell scripting. Working knowledge of Configuration Management (CM) tools and Web Services implementation.

The Level 3 Software Engineer (SWE) possesses the following capabilities: Analyze user requirements to derive software design and performance requirements. Debug existing software and correct defects. Design and code new software or modify existing software to add new features. Write or review software and system documentation. Integrate existing software into new or modified systems or operating environments. Develop simple data queries for existing or proposed databases or data repositories. Software development using languages such as C, C++, Python, Ruby, Perl, JavaScript, etc. Has experience with agile development processes. Has experience with source code control systems, such as Git. Serve as team lead at the level appropriate to the software development process being used on any particular project. Design and development of relational and non-relational database applications. Use of orchestration frameworks such as Spring and Kafka. Familiarization with queue management systems. Develop or implement algorithms to meet or exceed system performance and functional standards. Develop and execute test procedures for software components. Develop software solutions by analyzing system performance standards and conferring with users or system engineers; analyzing systems flow, data usage, and work processes; and investigating problem areas. Modify existing software to adapt to new hardware or to improve its performance.
Design, develop, and modify software systems using scientific analysis and mathematical models to predict and measure outcomes and consequences of design decisions. Java development using the Eclipse IDE (Integrated Development Environment). Development of Java 2 Enterprise Edition (J2EE) applications Experience using collaboration and software development tools (ie. Atlassian). Software development using continuous integration practices. Experience with container technologies (ie. Docker). Unix shell scripting Development of event driven, or data driven analytics Development of cloud-based solutions and technologies. Design or implement complex algorithms requiring adherence to strict timing, system resource, or interface constraints. Perform quality control on team products. Recommend and implement suggestions for improving documentation and software development process standards. Oversee one or more software development teams and ensure the work is completed in accordance with the constraints of the software development process being used on any particular project. Confer with system engineers and hardware engineers to derive software requirements and to obtain information on project limitations and capabilities, performance requirements, and interfaces. Coordinate software installation on a system and monitor performance to ensure operational specifications are met. Recommend new technologies and processes for complex software projects. Serve as the technical lead of multiple software development teams. Select the software development process in coordination with the customer and system engineering. Ensure quality control of all developed and modified software. Delegate programming and testing responsibilities to one or more teams and monitor their performance. Requirements The DevOps Software Engineer shall be responsible for the Operational and Maintenance (O&M) efforts including installation, configuration, integration, monitoring, and sustaining of a large multi-tenant containerized Kubernetes High Performance Computing as a service (HPCaaS) platform for a large Linux computing environment. A Master’s degree in computer science or related discipline from an accredited college or university, plus five (5) years of experience as a SWE, in programs and contracts of similar scope, type, and complexity OR a Bachelor’s degree in computer science or related discipline from an accredited college or university, plus seven (7) years of experience as a SWE, in programs and contracts of similar scope, type, and complexity OR Nine (9) years of experience as a SWE, in programs and contracts of similar scope, type, and complexity. Experience with Linux CLI. Experience writing scripts using Shell/Bash/Python. Experience developing with Python and Java in a Linux environment. General HPC technical knowledge regarding compute, network, memory, and storage system components. Experience installing, configuring, and supporting COTS/GOTS/FOSS software, libraries, and packages in a Linux environment. Extensive software development experience with Java and Python. Experience with stream/batch Big Data processing and analytic frameworks. Experience with CI/CD principles, methodologies, and tools such as GitLab CI. Experience with IaC (Infrastructure as Code) principles and automation infrastructure provisioning and configuration using tools such as Ansible. Experience with containerization technologies such as Docker. Experience deploying containerized services under Kubernetes orchestration. 
Demonstrated experience using system monitoring tools such as Prometheus/Grafana. Experience with Git for source code management, branching strategies, and team collaboration. Desired Skills Familiar with Site Reliability Engineering (SRE) principles and applications. Experience with the Atlassian Tool Suite (JIRA, Confluence). Experience using system monitoring tools such as Grafana/Prometheus. Benefits Join PROSYNC and enjoy our great benefits! Compensation: We offer bonuses that are awarded quarterly to our employees and our compensation rates are highly competitive. Health & Retirement: We offer a comprehensive Health Benefits package and 401K Retirement plan so you can take care of yourself and your family, now and in the future. Other health-related benefits include an employee assistance program for those difficult times or when you need to take care of your mental health. Education: Individual growth is a priority at ProSync. Employees are encouraged to take advantage of our company-sponsored continuing education program so you can get your degree or that next certification you need to propel you to the next level. Work/Life Balance: A healthy work/life balance is essential for building and executing your work effectively at ProSync, but it’s also necessary to allow you the room to pursue everything else you want to develop in your personal life.. We offer generous Paid Time Off and 11 paid holidays a year. ProSync also provides flexible work options that work with your schedule and lifestyle.
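To ground the Kubernetes and monitoring requirements above, here is a minimal, illustrative readiness check using the official Python kubernetes client. It is a sketch, not part of the posting: it assumes a reachable cluster and a local kubeconfig, and simply prints rather than raising alerts.

```python
# Hypothetical readiness check for a multi-tenant Kubernetes/HPCaaS cluster.
# Assumes a reachable cluster and a local kubeconfig; names are illustrative only.
from kubernetes import client, config


def report_unready_nodes() -> list[str]:
    """Return the names of nodes whose Ready condition is not True."""
    config.load_kube_config()  # or config.load_incluster_config() when running in a pod
    v1 = client.CoreV1Api()
    unready = []
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        if ready != "True":
            unready.append(node.metadata.name)
    return unready


if __name__ == "__main__":
    for name in report_unready_nodes():
        print(f"Node not ready: {name}")
```

In practice a check like this would feed an alerting pipeline (for example the Prometheus/Grafana stack the posting names) rather than standard output.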

Posted 1 week ago

Sage Care logo
Sage CarePalo Alto, California
Job Title: ITOps / DevOps Engineer (Contract, Convertible to Full-Time)
Location: Hybrid, Palo Alto, CA — Tuesday through Thursday

About Us
Sage Care is a fast-growing Series A healthcare technology startup founded by leaders from Apple, Uber, and Carbon Health. We recently emerged from stealth with $20 million in funding led by Yosemite, with investors including General Catalyst, Metrodora Ventures (co-founded by Chelsea Clinton), OVTR.VC, SV Angel, Liquid 2 Ventures, Seven Stars, Refract Ventures, AME Cloud Ventures, and Apolo Ohno. Our founding story and vision were recently profiled in Forbes, highlighting Sage Care's mission to build an "air-traffic-control system for healthcare."
With a strong customer pipeline, Sage Care is transforming healthcare by simplifying care navigation. Our platform makes it easier for patients to find the right doctor, helps providers focus on those who need them most, and ensures faster access to care. By harnessing clinically grounded AI and real-time optimization, we improve operational efficiency, increase system capacity, and deliver better patient outcomes at scale.

About the Role
We are seeking a highly capable ITOps/DevOps Engineer (Contractor) to support the foundational operations critical to our rapid growth. This role will be responsible for owning day-to-day IT operations, managing internal systems, and supporting core DevOps workflows across the engineering organization. This position is contract-based with the opportunity to convert to a full-time role as responsibilities expand and our infrastructure matures. The ideal candidate thrives in fast-moving environments, enjoys building scalable internal systems, and is excited to contribute to the operational backbone of an AI-driven healthcare platform.

Key Responsibilities
IT Operations & Internal Systems
    • Lead onboarding/offboarding, including device provisioning, access management, and account setup.
    • Manage and administer internal tools (Google Workspace, Slack, Teams, Vanta, Notion, etc.).
    • Roll out and maintain our centralized identity provider across all applications.
    • Ensure proper access controls and permission structures across internal and production environments (see the access-review sketch after this posting).
    • Support day-to-day IT troubleshooting for the engineering and business teams.
DevOps Support
    • Assist with basic Terraform tasks (imports, module updates, resource standardization).
    • Support CI/CD workflows and help maintain deployment pipelines across environments.
    • Help manage production cloud infrastructure and ensure environments remain secure, reliable, and well-maintained.
Operational Stability & Risk Reduction
    • Centralize application administration to reduce risk caused by decentralized ownership and unmanaged billing.
    • Identify and resolve inefficiencies in internal workflows, permissions, and system access.
    • Partner closely with DevOps to improve internal reliability, security posture, and automation.
Cross-Functional Support
    • Work closely with Engineering teams to streamline internal processes.
    • Collaborate with internal stakeholders to ensure smooth operations as the company scales.
    • Assist in building documentation and internal runbooks to support repeatable, reliable workflows.

Qualifications
Required
    • 3–5+ years of experience in IT or a hybrid ITOps/Engineering role.
    • Strong experience with identity and access management (Okta, Google Workspace, SSO/SAML).
    • Hands-on experience with device management, MDM tools, and IT automation.
    • Familiarity with cloud platforms (GCP preferred; AWS/Azure acceptable).
    • Experience with Terraform or other IaC tools.
    • Understanding of networking fundamentals, VPN configuration, and basic security practices.
    • Strong communication skills and the ability to support technical and non-technical stakeholders.
Nice to Have
    • Experience supporting HIPAA or other compliance frameworks (SOC 2, ISO 27001).
    • Prior experience in a healthcare, startup, or high-growth environment.
    • Familiarity with Kubernetes, CI/CD pipelines, and containerized environments.
    • Background in managing centralized billing, license allocations, or SaaS procurement.

What Success Looks Like in This Role
    • Smooth, reliable onboarding/offboarding processes with consistent access management.
    • Reduced IT-related friction across engineering and business teams.
    • Improved stability of internal systems through standardized processes and documentation.
    • Clear path to converting into a full-time ITOps/DevOps role as Sage Care grows.
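As a rough illustration of the access-control and offboarding hygiene this posting emphasizes, the sketch below diffs an HR roster against an identity-provider export. The file names and column names are hypothetical; a real workflow would pull from the IdP's API rather than CSV exports.

```python
# Minimal access-review sketch for onboarding/offboarding hygiene.
# Assumes two CSV exports (one from HR, one from the identity provider);
# file names and column names are hypothetical.
import csv


def load_emails(path: str, column: str) -> set[str]:
    with open(path, newline="") as handle:
        return {row[column].strip().lower() for row in csv.DictReader(handle)}


def access_review(hr_csv: str, idp_csv: str) -> None:
    active_staff = load_emails(hr_csv, "email")            # people who should have access
    idp_accounts = load_emails(idp_csv, "primary_email")   # accounts that actually exist
    for email in sorted(idp_accounts - active_staff):
        print(f"Offboarding gap: {email} still has an active account")
    for email in sorted(active_staff - idp_accounts):
        print(f"Onboarding gap: {email} has no identity-provider account")


if __name__ == "__main__":
    access_review("hr_roster.csv", "idp_users.csv")
```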

Posted 1 week ago

American Credit Acceptance logo
American Credit AcceptanceMeridian, Idaho
Description
Position Title: DevOps Manager
Department: Technology / DevOps
Location: Boise, Idaho
Reports To: Director of IT

Position Summary
The DevOps Manager will lead a high-performing team responsible for automation, cloud infrastructure, and CI/CD pipeline management across multiple environments. This role is central to enabling development velocity, improving system reliability, and driving innovation through AI-assisted DevOps practices. The ideal candidate is a strong communicator, experienced in AWS and Atlassian toolsets, and skilled at balancing hands-on technical leadership with effective project and people management.

Key Responsibilities
    • Lead, mentor, and grow a team of DevOps Engineers and Cloud Platform specialists.
    • Oversee design, implementation, and maintenance of CI/CD pipelines across multiple applications and environments.
    • Integrate AI and automation technologies to improve efficiency, reduce manual intervention, and enhance system observability.
    • Partner with Application Development, QA, and Infrastructure teams to ensure smooth and secure deployments.
    • Manage and optimize cloud infrastructure within AWS, ensuring scalability, cost efficiency, and adherence to security best practices.
    • Administer and enhance Atlassian toolsets (Bitbucket, Bamboo, Jira, Confluence) to support the full SDLC.
    • Define and track operational KPIs, including deployment frequency, lead time for changes, mean time to restore (MTTR), and change failure rate (see the sketch after this posting).
    • Collaborate cross-functionally to support initiatives involving AI-driven monitoring, predictive alerting, and self-healing systems.
    • Oversee change management processes and ensure compliance with security, regulatory, and audit standards.
    • Lead or contribute to key DevOps and platform projects, ensuring timely delivery, clear communication, and measurable outcomes.

Qualifications
    • Bachelor's degree in Computer Science, Information Technology, Engineering, or equivalent experience.
    • 7+ years of DevOps or Cloud Engineering experience, including 2+ years in a management or team lead capacity.
    • Strong proficiency in AWS (EC2, ECS/EKS, Lambda, CloudFormation, IAM, S3, RDS, etc.).
    • Must have hands-on experience with CI/CD tools such as Bamboo, Bitbucket Pipelines, or GitHub Actions.
    • Proven success implementing automation frameworks and infrastructure-as-code (IaC) using Terraform, Ansible, or CloudFormation.
    • Experience managing Atlassian suite administration (Jira, Confluence, Bitbucket, Bamboo).
    • Familiarity with AI-driven DevOps concepts, observability platforms (e.g., Datadog, Prometheus, Grafana), and ML-assisted automation.
    • Excellent communication, leadership, and project management skills, with the ability to translate complex technical topics into clear business language.
    • Strong understanding of Agile, ITIL, and modern software development methodologies.

Preferred Skills
    • Experience with containerization and orchestration (Docker, Kubernetes).
    • Background in Python, Go, or scripting for automation and tooling.
    • Exposure to security and compliance frameworks (SOC 2, ISO 27001, NYDFS, etc.).
    • Demonstrated success driving culture change and DevOps maturity across distributed teams.

Work Environment and Physical Demands
This job operates in a professional office environment. This role routinely uses standard office equipment such as computers, phones, photocopiers, filing cabinets, and fax machines.

Position Type/Expected Hours of Work
This is a full-time position with a Monday–Friday work schedule and occasional schedule adjustments for night and weekend work.

Travel
This position will require up to 10% travel.

EEO Statement
ACA provides equal employment opportunities (EEO) to all applicants for employment without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state, and local laws. ACA complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities.
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.

California Privacy Notice
As an employer of California residents, we are dedicated to protecting your privacy rights. Any personal information you provide during the application process will be used solely for permitted internal purposes and will be handled in accordance with applicable privacy laws. By applying to this position, you consent to the collection, use, and disclosure of your personal information as described in our Employee Privacy Notice.
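To make the KPI responsibility above concrete, here is a back-of-the-envelope calculation of the four DORA-style metrics the posting names. The deployment records are invented sample data, and measuring lead time as commit-to-deploy is one common convention among several.

```python
# Illustrative calculation of deployment frequency, lead time for changes,
# change failure rate, and MTTR from made-up deployment records.
from datetime import datetime
from statistics import mean

deployments = [
    # (commit_time, deploy_time, failed, minutes_to_restore_if_failed)
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 15), False, None),
    (datetime(2024, 5, 2, 11), datetime(2024, 5, 3, 10), True, 42),
    (datetime(2024, 5, 6, 8), datetime(2024, 5, 6, 13), False, None),
]

window_days = 7
deploy_frequency = len(deployments) / window_days
lead_time_hours = mean(
    (deploy - commit).total_seconds() / 3600 for commit, deploy, _, _ in deployments
)
restore_times = [restore for _, _, failed, restore in deployments if failed]
change_failure_rate = len(restore_times) / len(deployments)
mttr_minutes = mean(restore_times) if restore_times else 0.0

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Mean lead time for changes: {lead_time_hours:.1f} h")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"MTTR: {mttr_minutes:.0f} min")
```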

Posted 30+ days ago

ICF logo
ICFReston, Virginia

$81,094 - $166,810 / year

* Please note: This role is contingent upon a contract award. While it is not an immediate opening, we are actively conducting interviews and extending offers in anticipation of the award.

The Work:
Our Health Engineering Solutions team works side by side with customers to articulate a vision for success, and then make it happen. We know success doesn't happen by accident. It takes the right team of people, working together on the right solutions for the customer. We are looking for a seasoned DevOps Engineer who will be a key driver to make this happen. In this position, you will be part of the team building a best-in-class health care reporting service. Learn and grow using AWS infrastructure, DevSecOps, Agile Scrum, and an incremental delivery philosophy, with highly supportive peers constantly sharing subject matter expertise.

Job Location:
Remote; however, this position requires that the job be performed in the United States, and there will be travel of approximately 10% to a customer site. If you accept this position, you should note that ICF does monitor employee work locations, blocks access from foreign locations/foreign IP addresses, and prohibits personal VPN connections. Our core work hours are 9am - 5pm Eastern Time, with the option to start earlier or work later depending on your time zone.

What You Will Do:
    • Implement best-in-class cloud-based solutions in AWS using infrastructure as code
    • Deploy, set up, and run infrastructure configurations for various AWS services, utilizing Infrastructure as Code such as Terraform
    • Engage with technical stakeholders including but not limited to application development, networking, infrastructure, information security, risk, enterprise identity and access management, and security operations
    • Enable and optimize the automation of application and infrastructure environments
    • Be part of a team where you collaborate to build cloud infrastructure, with an understanding of AMIs, containers, and serverless functions
    • Develop, maintain, and improve continuous integration/continuous delivery (CI/CD) pipelines for delivering features, fixes, and system updates in development, integration, and production environments
    • Set up, integrate, and maintain a scalable, stable set of CI/CD tools to support development, testing, and security scanning
    • Implement Amazon CloudWatch, Splunk, and other third-party monitoring solutions to provide continuous monitoring capabilities, track all aspects of the system, infrastructure, performance, and application errors, and roll up metrics (see the CloudWatch sketch after this posting)
    • Analyze functional and non-functional business requirements, translate them into technical operational requirements, and propose CI/CD pipelines with tools and plugins

What You Will Bring With You:
    • Bachelor's degree in computer science, Information Systems, Engineering, or another related scientific or technical discipline
    • 3+ years of experience in setting up CI/CD pipelines with integration with open-source plugins
    • 3+ years of experience in DevOps/Agile/Scrum environments and development
    • 5+ years of strong hands-on experience with configuration management, cloud orchestration, and automation tools in AWS environments
    • 5+ years of experience with provisioning and managing infrastructure as well as applications in AWS cloud environments
    • 2+ years of experience with identifying and implementing automation for Continuous Integration/Continuous Deployment
    • 3+ years of experience writing infrastructure as code using Terraform
    • Candidate must be able to obtain and maintain a Public Trust
    • Candidate must reside in the U.S., be authorized to work in the U.S., and all work must be performed in the U.S.
    • Candidate must have lived in the U.S. for three (3) full years out of the last five (5) years

What We Would Like You To Bring With You:
    • Experience designing and implementing automated monitoring capabilities to generate dashboards with trends, useful messages, and immediate notifications, and to provide real-time metrics using Splunk or similar services
    • Knowledge of multi-account architecture, leveraging tools such as AWS Control Tower, SCPs, GuardRails, and Transit Gateways
    • Wide technology experience that may include cloud architecture, cloud migrations, application development, networking, security, storage, analytics, or machine learning
    • AWS Solutions Architect (Associate or Professional) certification
    • Familiarity with standard concepts, practices, and procedures such as NIST, FISMA, FedRAMP, and Common Criteria regulations and standards
    • Familiarity with MLOps, the machine learning lifecycle, and the product landscape, for example Amazon SageMaker, Apache Airflow, Looker, and Trifacta (you don't need to be an expert in all of these)
    • Working knowledge of Linux

Professional Skills:
    • Excellent communication and interpersonal skills to interface effectively at all levels of the business
    • Highly effective analytical, problem-solving, and decision-making capabilities

#DMX-HES #Li-cc1 #Indeed

Working at ICF
ICF is a global advisory and technology services provider, but we're not your typical consultants. We combine unmatched expertise with cutting-edge technology to help clients solve their most complex challenges, navigate change, and shape the future. We can only solve the world's toughest challenges by building a workplace that allows everyone to thrive. We are an equal opportunity employer. Together, our employees are empowered to share their expertise and collaborate with others to achieve personal and professional goals. For more information, please read our EEO policy. We will consider for employment qualified applicants with arrest and conviction records.
Reasonable Accommodations are available, including, but not limited to, for disabled veterans, individuals with disabilities, and individuals with sincerely held religious beliefs, in all phases of the application and employment process. To request an accommodation, please email Candidateaccommodation@icf.com and we will be happy to assist. All information you provide will be kept confidential and will be used only to the extent needed to provide reasonable accommodations. Read more about workplace discrimination rights or our benefit offerings, which are included in the Transparency in (Benefits) Coverage Act.

Candidate AI Usage Policy
At ICF, we are committed to ensuring a fair interview process for all candidates based on their own skills and knowledge. As part of this commitment, the use of artificial intelligence (AI) tools to generate or assist with responses during interviews (whether in-person or virtual) is not permitted. This policy is in place to maintain the integrity and authenticity of the interview process. However, we understand that some candidates may require accommodation that involves the use of AI. If such an accommodation is needed, candidates are instructed to contact us in advance at candidateaccommodation@icf.com. We are dedicated to providing the necessary support to ensure that all candidates have an equal opportunity to succeed.

Pay Range
There are multiple factors that are considered in determining final pay for a position, including, but not limited to, relevant work experience, skills, certifications and competencies that align to the specified role, geographic location, education and certifications, as well as contract provisions regarding labor categories that are specific to the position.
The pay range for this position based on full-time employment is: $81,094.00 - $166,810.00
Nationwide Remote Office (US99)
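As a small illustration of the CloudWatch monitoring work described above, here is a hedged boto3 sketch that creates a CPU-utilization alarm for an EC2 instance. The alarm name, instance ID, thresholds, and SNS topic ARN are placeholders, not values from the posting.

```python
# A minimal sketch of CloudWatch monitoring: create a CPU-utilization alarm
# for an EC2 instance with boto3. All identifiers below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="example-high-cpu",
    AlarmDescription="CPU above 80% for 10 minutes (illustrative)",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # 5-minute datapoints
    EvaluationPeriods=2,        # two consecutive breaches trigger the alarm
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:example-alerts"],
)
```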

Posted 1 day ago

Medeloop logo
MedeloopSan Francisco, California
About Medeloop
Medeloop is creating the future of clinical operations and health research through cutting-edge AI and big data technologies. Our unified platform, spanning AI-powered analytics, study management, and grant automation, streamlines the entire research lifecycle, enabling faster, smarter, and more impactful discoveries across medicine and public health.
Recognized by Politico as the "AI Disrupter-in-Chief" for healthcare and public health, Medeloop is trusted by premier institutions across government, academia, and life sciences. From major healthcare centers to leading life science companies, our partners rely on Medeloop to unlock insights that were previously out of reach. At the heart of our platform is one of the largest and most diverse health data ecosystems in the industry, with over 100 million patient records that fuel the work of AI "scientists" purpose-built to drive breakthroughs in health equity, drug development, chronic disease, and more. Interested candidates can review a demo of one of our AI scientist research pipelines and read about our mission on our LinkedIn.
We are a fast-growing company backed by world-class investors including General Catalyst, Icon Ventures, Inovia Capital, and Healthier Capital. Our team includes leaders in AI, life sciences, and medical research (such as the former editor-in-chief of JAMA, the team who wrote the most-read scientific publication in medicine for 2023 and public health for 2018, and the creators of BloombergGPT) who bring unmatched expertise and vision to our mission. The company is led by serial entrepreneurs with a proven track record.
We're not just building tools; we're building a better future. By accelerating research timelines and expanding access to insights, Medeloop empowers the next generation of researchers to deliver faster cures, smarter policy, and ultimately, save lives. Join us as we build the future of science.
We are seeking a highly skilled and motivated Senior DevOps Engineer to join our dynamic team. The ideal candidate will have a strong background in cloud infrastructure, automation, and continuous integration/continuous deployment (CI/CD) practices. This role will be critical in building and maintaining the infrastructure that supports our clinical research products, ensuring scalability, reliability, and security.

Role & Responsibilities:
    • Design, implement, and manage scalable, secure, and reliable cloud infrastructure using AWS.
    • Develop and maintain CI/CD pipelines to automate the deployment and testing of applications and services.
    • Oversee, maintain, and collaborate to fix issues caught by SAST, DAST, and related scanning.
    • Implement and manage infrastructure as code (IaC) using tools like AWS CDK or CloudFormation (see the CDK sketch after this posting).
    • Monitor, troubleshoot, and optimize system performance and reliability.
    • Ensure the security and compliance of our infrastructure by implementing best practices and security measures.
    • Collaborate with development teams to integrate DevOps practices into the software development lifecycle.
    • Develop and maintain comprehensive documentation for infrastructure and DevOps processes.
    • Stay up-to-date with the latest advancements in DevOps, cloud technologies, and industry best practices to continually enhance our solutions.

Requirements:
    • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
    • 5+ years of experience as a DevOps engineer, preferably in the healthcare industry.
    • Proficiency in cloud platforms and services, particularly AWS (e.g., EC2, S3, Lambda, RDS).
    • Strong experience with CI/CD tools like GitHub Actions.
    • Expertise in infrastructure as code (IaC) tools like AWS CDK or CloudFormation.
    • Knowledge of containerization and orchestration tools such as Docker and Kubernetes.
    • Experience with monitoring and logging tools such as Sentry, CloudWatch, and others.
    • Knowledge of scripting languages such as Python, Bash, or PowerShell.
    • Strong understanding of networking, security, and compliance best practices.
    • Highly proactive, with a positive attitude and the initiative to act without waiting for direction.
    • Excellent problem-solving skills and the ability to work independently as well as in a team.
    • Strong communication and collaboration skills.
    • Passion for improving healthcare outcomes through technology.

Nice To Have:
    • Familiarity with healthcare data standards, compliance, and protocols such as HIPAA, HL7 FHIR, OMOP, and i2b2.
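For the IaC responsibility above, here is a minimal AWS CDK (Python) sketch of a single, security-conscious S3 bucket. It assumes the aws-cdk-lib and constructs packages; the stack and bucket names are hypothetical, not part of the posting.

```python
# Minimal AWS CDK v2 sketch: one encrypted, versioned, non-public S3 bucket.
# Stack and construct names are illustrative; `cdk deploy` would synthesize and apply it.
from aws_cdk import App, RemovalPolicy, Stack, aws_s3 as s3
from constructs import Construct


class ResearchDataStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "StudyArtifacts",
            versioned=True,                                   # keep object history
            encryption=s3.BucketEncryption.S3_MANAGED,        # server-side encryption
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            removal_policy=RemovalPolicy.RETAIN,              # never delete data on stack teardown
        )


app = App()
ResearchDataStack(app, "ResearchDataStack")
app.synth()
```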

Posted 30+ days ago

Obviant logo
ObviantArlington, Virginia
DevSecOps / Platform Engineer
Arlington, VA — Full Time

The defense market is surging, but the data that drives it hasn't kept up. Companies, government, and investors are forced to perform heavily manual processes and piece together hundreds of disparate sources to make decisions. Obviant is building a data source of truth and AI tools for defense acquisition to solve this. We fuse information from thousands of sources – structured and unstructured – to provide a cohesive picture of budget, programs, the organizations running them, and much more. Whether it's a company navigating GTM or a program manager developing capabilities, we're providing all sides with the intelligence they need to execute effectively. We're growing fast and backed by top funds and DoD/national security veterans. We believe that public sector mission sets matter above anything else. If you feel the same way, we'd love for you to join us.

The Role
As a DevSecOps / Platform Engineer at Obviant, you will build and operate the foundational systems that power our data ingestion pipelines, application infrastructure, and secure deployment environments. You'll work across cloud infrastructure, CI/CD, container orchestration, and security automation to ensure our platform is reliable, scalable, and compliant with the needs of defense users. This is a hands-on role with deep ownership. You'll collaborate closely with engineering, data, and product teams to develop the infrastructure that supports fast iteration, an expanding feature surface, and mission-critical workflows for government stakeholders. We move fast, simplify complexity, and build systems that scale.

Responsibilities
    • Design, implement, and operate secure, cloud-native infrastructure that powers Obviant's core platform
    • Build and maintain CI/CD pipelines that enable high-velocity, high-reliability shipping across teams
    • Work with containerized workloads (Docker, Kubernetes) to automate deployments and manage environments
    • Develop Infrastructure-as-Code frameworks to standardize and scale system provisioning
    • Implement and uphold DevSecOps best practices: hardening images, managing vulnerabilities, and automating security controls (see the Dockerfile-audit sketch after this posting)
    • Collaborate with full-stack engineers and data teams to support ingestion pipelines, new product workflows, and user-facing features
    • Troubleshoot infrastructure issues in real time, identify root causes, and drive long-term improvements to reliability and performance
    • Participate in research and development to improve automation, observability, build tooling, and operational efficiency
    • Contribute directly to how government technology is built and delivered by shaping infrastructure strategy end-to-end

What You Bring
    • 3+ years of experience operating cloud infrastructure (AWS preferred; GCP/Azure welcome)
    • Strong knowledge of containerization and orchestration (Kubernetes, EKS, Helm, etc.)
    • Experience implementing Infrastructure as Code (Terraform, Pulumi, CloudFormation, etc.)
    • Hands-on experience building and maintaining CI/CD pipelines
    • Understanding of container/image hardening and vulnerability management
    • Proficiency in at least one scripting language (TypeScript, Go, etc.)
    • Experience debugging distributed systems and automating workflows
    • Strong communication skills: clear, concise, and collaborative
    • Ability to thrive in ambiguity, move quickly, and drive outcomes with high ownership
    • Passion for national security and mission-driven work
    • Comfortable with the pace and expectations of a fast-growing startup

Bonus:
    • Experience supporting government or regulated environments
    • Exposure to IL4/IL5, compliance frameworks, or security accreditation processes
    • Kubernetes certifications (CKA, CKAD)
    • AWS Solutions Architect certification
    • Experience with multi-cloud networking and container hardening
    • An active security clearance, or the ability to obtain one

Our Working Style — Why You Might Love It Here
    • You care about government and are mission-oriented – Our work is important and is critical to improving a system that impacts us all.
    • Perseverance and endurance – Hard problems are worth solving, and solving them can take a long time. There is no such thing as exhausting all options; it's just time to look for new ones.
    • Empowerment > micro-management – We're building a culture of high performers. Our job is to equip them with what they need and eliminate roadblocks for them to succeed. We trust their judgment, skills, and experience from there.
    • We're collaborative and communicate well – Constructive dialogue that takes all viewpoints into account is the only way we get to the right decision. Respect, trust, and complete transparency with each other are critical – keep it all in the open.
    • You're really good at what you do… but it speaks for itself – High output, no ego. Being humble is extremely important to us.
    • You don't mind change and are comfortable with uncertainty – We're deliberate about setting goals, but we're comfortable changing course and dealing with discomfort to get there. We're still figuring things out, and that demands being flexible and iterative.
    • Work doesn't feel like "work" to you – We're passionate about what we're going after, and we devote more time to it than a typical 9-5. That often means putting in extra time at night and occasionally on weekends. However, maintaining your own personal balance comes above all else, and you should establish that however you need to – flexible schedule, taking advantage of time off, or anything else you need.
    • You like to move fast and have a bias towards action – Our roadmap is directional at this stage; speed and a feeling of urgency are key to prove it out. We expect each other to proactively determine what needs to get done and go for it.
    • Integrity is never negotiable – Transparency, honesty, and respect come above all else.

Benefits & Structure
We're a tight-knit team headquartered in Arlington, VA. We work in the office together most days and believe being in the same place is a competitive advantage.
    • Flexible schedule – We all have other things going on in our lives. Doctor visits, kids' activities, dog walks – take care of it whenever you have to. And work from home when you need to.
    • Competitive compensation + sizeable equity – We're building something with massive upside potential, and you'll have ownership in that. This is ours.
    • Flexible vacation time – Use what you want, as long as you're taking care of what needs to get done.
    • Full health, dental, and vision insurance.
    • And more…
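To illustrate the image-hardening responsibility above, here is a rough, self-contained Python sketch that flags a few common Dockerfile red flags. The rules and the file path are illustrative only; it is not a substitute for a real scanner such as Trivy or Grype.

```python
# Rough illustration of automated image-hardening checks: scan a Dockerfile
# for a few common red flags. The rule set below is intentionally minimal.
from pathlib import Path

CHECKS = [
    ("uses a :latest base image", lambda line: line.startswith("FROM") and ":latest" in line),
    ("explicitly runs as root", lambda line: line.startswith("USER root")),
    ("fetches remote content with ADD", lambda line: line.startswith("ADD ") and "http" in line),
    ("bakes in a secret-looking env var", lambda line: line.startswith("ENV") and "SECRET" in line.upper()),
]


def audit_dockerfile(path: str) -> list[str]:
    findings = []
    for number, raw in enumerate(Path(path).read_text().splitlines(), start=1):
        line = raw.strip()
        for message, predicate in CHECKS:
            if predicate(line):
                findings.append(f"line {number}: {message}: {line}")
    return findings


if __name__ == "__main__":
    for finding in audit_dockerfile("Dockerfile"):
        print(finding)
```

A check like this would typically run as a CI gate before the image is built and pushed.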

Posted 4 days ago

T logo
Tek SpikesDallas, Texas
Description
Lead DevOps Engineer – Dallas, TX (Local to TX Only | In-Person Interview Required)
Location: Dallas, TX (Onsite – Local Texas candidates ONLY)
Interview: Must be available for an in-person interview
Visa: US Citizens & Green Card Holders only
Experience: 10+ years (Lead-level)

💼 Job Description
We are looking for a Lead DevOps Engineer (10+ years of experience) to drive platform automation, cloud infrastructure engineering, and CI/CD modernization for large-scale enterprise applications. The ideal candidate will have deep hands-on DevOps expertise, strong cloud experience, and a proven track record of leading teams, implementing DevOps best practices, and delivering scalable environments. This is an onsite role in Dallas, TX and requires candidates who are already local to Texas and comfortable with in-person interviews.

🔥 Key Responsibilities
    • Lead strategy, architecture, and execution of DevOps initiatives across multiple application teams.
    • Design, implement, and optimize enterprise-grade CI/CD pipelines end-to-end.
    • Architect and manage highly available cloud infrastructure on AWS / Azure / GCP (customizable).
    • Drive automation for builds, deployments, testing, monitoring, and security.
    • Implement and maintain Infrastructure as Code using Terraform, CloudFormation, or similar tools.
    • Oversee containerization and orchestration with Docker and Kubernetes (EKS/AKS/GKE).
    • Collaborate with dev, QA, and architecture teams to enhance release velocity and stability.
    • Establish monitoring, logging, and observability using Prometheus, Grafana, ELK, Splunk, Datadog, etc. (see the Prometheus sketch after this posting).
    • Manage production issues, identify root causes, and enforce system reliability best practices.
    • Mentor and guide junior and mid-level DevOps engineers in best practices and technical decisions.

🧰 Required Skills & Experience
    • 10+ years of experience in DevOps, Cloud Engineering, or SRE roles.
    • Proven lead-level experience driving DevOps initiatives and leading teams.
    • Strong hands-on skills with:
        Cloud: AWS, Azure, or GCP
        CI/CD: Jenkins, GitHub Actions, GitLab CI, Azure DevOps
        Containers: Docker, Kubernetes
        IaC: Terraform, CloudFormation, Ansible
        Scripting: Python, Bash, Shell
    • Strong understanding of cloud networking, security, scalability, and automation.
    • Experience supporting large-scale distributed systems and microservices.
    • Excellent communication, leadership, and collaboration skills.

💯 Preferred Qualifications
    • Experience with service mesh (Istio, Linkerd).
    • Experience in enterprise, financial, healthcare, or telecom environments.
    • Cloud certifications (AWS DevOps, CKA/CKAD, Azure DevOps) preferred but not required.

📨 Notes
    • Local Texas candidates only (no exceptions).
    • Must be willing to attend onsite, in-person interviews.
    • US Citizens & Green Card Holders only, as per client requirement.
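In the spirit of the Prometheus/Grafana observability stack named above, here is a tiny, hedged sketch that exposes request and latency metrics from a Python process using the prometheus_client library. The metric names and the simulated workload are made up for illustration.

```python
# Tiny observability sketch: expose a request counter and a latency histogram
# at :8000/metrics so Prometheus can scrape them. Workload is simulated.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")


@LATENCY.time()
def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.2))            # pretend to do work
    status = "200" if random.random() > 0.05 else "500"
    REQUESTS.labels(status=status).inc()


if __name__ == "__main__":
    start_http_server(8000)                          # metrics endpoint for Prometheus
    while True:
        handle_request()
```

Dashboards and alerts (for example in Grafana) would then be built on top of the scraped series rather than inside the application.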

Posted 1 week ago

C logo
Credit GenieNew York, New York
Company
Credit Genie is a mobile-first financial wellness platform designed to help individuals take control of their financial future. We leverage artificial intelligence to provide personalized insights and are building a financial ecosystem of tools and services that provide instant access to cash and help build credit. Our goal is to empower every customer to achieve long-term financial stability.
Founded in 2019 by Ed Harycki, former Swift Capital founder (acquired by PayPal in 2017), and backed by Khosla Ventures and led by industry pioneers from companies such as PayPal, Square, and Cash App, we are well positioned to build the future of inclusive finance through cutting-edge technology and customer-centric solutions.
We are seeking a DevOps Engineer to design, build, and maintain scalable, reliable, and secure cloud infrastructure. This role will be instrumental in automating deployments, optimizing performance, and improving system resilience, working closely with engineering, security, data, and AI teams to enable seamless operations.

What you'll do
    • Design, implement, and manage cloud-based infrastructure (AWS) to ensure scalability and resilience.
    • Implement robust incident response strategies.
    • Define and monitor SLOs and SLAs to ensure alignment with business goals and user expectations; leverage insights from these metrics to improve reliability and inform strategic decisions (see the error-budget sketch after this posting).
    • Monitor and improve system reliability, availability, and performance, implementing best practices for high uptime.
    • Build and maintain CI/CD pipelines to enable fast and secure deployments for our engineering, data, and AI/ML teams.
    • Implement observability tools, including monitoring, logging, and alerting solutions, to proactively identify and resolve issues.
    • Automate infrastructure provisioning, configuration management, and deployments using AWS CDK, Terraform, or similar tools.
    • Collaborate with our security team to enforce best practices across infrastructure, including IAM, encryption, vulnerability scanning, and incident response planning.
    • Work with security and compliance teams to ensure adherence to regulatory requirements.
    • Conduct disaster recovery and business continuity planning, ensuring rapid response and system recovery.

Requirements
    • 5+ years of experience in DevOps, SRE, or cloud infrastructure roles.
    • Strong expertise in cloud computing (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and automation (Terraform, AWS CDK, or equivalent).
    • Strong knowledge of Linux systems, networking, and security best practices.
    • Proficiency in monitoring, logging, and alerting tools such as Prometheus, Grafana, ELK, or Datadog.
    • Experience with incident response, root cause analysis, and performance optimization.
    • Familiarity with fintech security and compliance regulations is a plus.
    • Strong scripting or programming skills (Python, Go, or Bash) for automation.

Benefits and Perks
Our goal is to provide a comprehensive offering of benefits and perks that promote better financial, mental, and physical wellness.
We believe working alongside each other in person is the best way to build a great product and foster a strong company culture. Our expectation is that employees are in the office five days a week, allowing for optimal collaboration, inclusivity, and productivity. At the same time, we understand that life happens and recognize the importance of flexibility. We are committed to supporting our employees when circumstances arise that require remote work or adjusted schedules. Our goal is to ensure everyone can effectively balance personal and professional responsibilities while maintaining our collaborative and productive environment.
Here are some highlights of our benefits and perks offerings; feel free to ask your recruiting partner for more details on our comprehensive offering for employees.
    • 100% company-paid medical, dental, and vision coverage for you and your dependents on your first day of employment.
    • Monthly fitness reimbursement up to $100 or a full membership to LifeTime Fitness.
    • 401(k) with a 2.5% match and immediate vesting.
    • Meal program for breakfast, lunch, and dinner.
    • Life and accidental insurance.
    • Flexible PTO.
Your actual level and base salary will be determined on a case-by-case basis and may vary based on the following considerations: job-related knowledge and skills, education, and experience. Base salary is just one part of your total compensation and rewards package at Credit Genie. You may also be eligible to participate in the bonus and equity programs. You will also have access to comprehensive medical, vision, and dental coverage, a 401(k) retirement plan with company match, short and long term disability insurance, life insurance, and flexible PTO, along with many other benefits and perks.
Credit Genie is a proud Equal Opportunity Employer where we welcome and celebrate differences. We are committed to providing a workspace that is safe and inclusive, where everyone feels supported, connected, and inspired to do their best work. If you require any accommodations to participate in our recruitment process, please inform us of your needs when we contact you to schedule an interview.
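To make the SLO responsibility above tangible, here is back-of-the-envelope error-budget math in Python. The 99.9% target and the request counts are invented sample numbers, and the "freeze releases" policy is one common convention, not something the posting prescribes.

```python
# Illustrative SLO/error-budget arithmetic with made-up sample numbers.
def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> None:
    allowed_failures = total_requests * (1.0 - slo_target)   # the error budget for the window
    budget_used = failed_requests / allowed_failures if allowed_failures else float("inf")
    availability = 1.0 - failed_requests / total_requests
    print(f"Availability this window: {availability:.4%} (target {slo_target:.2%})")
    print(f"Error budget consumed: {budget_used:.1%}")
    if budget_used > 1.0:
        print("Budget exhausted: freeze risky releases and prioritize reliability work.")


if __name__ == "__main__":
    # e.g., a 99.9% availability SLO over 10M requests with 7,200 failures
    error_budget_report(slo_target=0.999, total_requests=10_000_000, failed_requests=7_200)
```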

Posted 30+ days ago

N logo
NTT DATA, Europe & LATAM, Branch in USAMiami, Florida
Job Title: Senior Lead DevOps Engineer
Location: Miami, FL (Hybrid – 3 days onsite per week)
Duration: 1 Year (Renewable)

Role Summary
We are seeking a Senior Lead DevOps Engineer to lead the design and implementation of end-to-end CI/CD, SRE, and observability frameworks across hybrid cloud environments. This role will drive modernization of the software delivery lifecycle, enhance reliability, and improve deployment automation for mission-critical systems.

Responsibilities
    • Lead the design, automation, and implementation of CI/CD pipelines across multiple environments.
    • Collaborate with cross-functional teams to deliver DevOps strategies and best practices.
    • Implement observability and SRE frameworks using tools such as Grafana, Prometheus, Datadog, and Kibana.
    • Define infrastructure automation using Terraform or Ansible for both on-prem and cloud-native environments.
    • Manage DevOps governance, security, and compliance with RBAC and identity management solutions.
    • Ensure high availability, resilience, and monitoring of production systems.
    • Mentor junior DevOps engineers and establish operational excellence standards.

Requirements
    • 8+ years of experience in DevOps, CI/CD, and SRE practices.
    • Deep expertise with Jenkins, GitLab CI/CD, Azure DevOps, GitHub Actions, or Bamboo.
    • Strong knowledge of cloud technologies (AWS, Azure, or GCP).
    • Hands-on experience with observability tools (Grafana, Prometheus, Datadog, Kibana).
    • Proven experience designing and implementing API Management solutions (Apigee, Kong, Microsoft APIM).
    • Proficiency with Infrastructure as Code (Terraform, Ansible).
    • Familiarity with SDLC governance, version control, and automation pipelines.
    • Strong problem-solving and collaboration skills in enterprise environments.

Certifications
Required: AWS Certified DevOps Engineer – Professional OR Azure DevOps Engineer Expert
Preferred: Kubernetes Administrator (CKA), Terraform Associate, ITIL Foundation, DevOps Foundation

Additional Notes
    • Candidate must have proven leadership in DevOps transformation.
    • Excellent communication skills in English; Portuguese preferred.

Posted 30+ days ago

Zuma logo

Staff Engineer (Backend, DevOps, Infrastructure)

ZumaSan Francisco Bay Area, California


Job Description

About Zuma
Zuma is pioneering the future of agentic AI, and our focus is to transform the rental market experience for consumers and property managers alike. Our innovative platform is engineered from the ground up to boost operations efficiency and enhance support capabilities for property management businesses across the US and Canada, a ~$200B market.
Off the back of our Series-A in early 2024, Zuma is scaling rapidly. Achieving our vision requires a team of passionate, innovative individuals eager to leverage technology to redefine customer-business interactions. We're on the hunt for exceptional talent ready to join our mission and contribute to building a groundbreaking technology that reshapes how businesses engage with customers.
Zuma has raised over $17M in funding to date and has support from world-renowned investors, including Andreessen Horowitz (a16z), Y Combinator, King River, Range Ventures, and distinguished angel investors like YC’s former COO, Qasar Younis.
As a Staff Engineer, you will:
Help define how humans collaborate with intelligent systems in one of the largest and most underserved industries in the world: property management.
You’ll shape the technical foundation of a platform that is not just supporting human workflows, but executing them autonomously through AI agents. This is a rare opportunity to influence how an entire industry evolves, building tools that transform repetitive operational tasks into seamless, intelligent experiences.
Your work will directly contribute to how trust is built between humans and machines, how operations scale without added headcount, and how residents and staff experience a new, AI-powered standard of service.
We're not just building software; we're designing AI that people want to work with. Delightful, trustworthy, and deeply effective.
Join us to help lead the AI revolution in multifamily, drive meaningful real-world impact, and be part of reimagining what work can feel like when done side-by-side with intelligent agents.
You will be a cornerstone of our engineering organization, reporting to the VPE. This is a pivotal role where you'll lead critical system rewrites, architect scalable foundations for our AI platform, and establish the technical standards that will shape our engineering culture for years to come.
You'll work at the intersection of cutting-edge LLM technology and practical business applications, creating sophisticated systems that power our AI leasing agent while building self-serve experiences that enable rapid customer onboarding. As our first US-based engineer, you'll bridge the gap between our product vision and technical implementation.
This role offers a rare opportunity to directly influence how we architect the next generation of our platform. You'll tackle projects like rebuilding our onboarding/configuration system to be self-serve, creating robust analytics infrastructure to measure AI performance, and reimagining our integration framework to connect seamlessly with customer systems. Your work will significantly reduce manual engineering overhead while enabling rapid scaling of our customer base.
We're looking for a Staff Engineer to help us bring that future to life.

Why This Could Be Your Dream Role

    • You'll work directly with cutting-edge LLM technology in a real-world application
    • You want to work at a company where customers feel your impact every day
    • You'll architect AI-powered systems that are transforming the real estate industry
    • You'll have autonomy to design and implement innovative technical solutions
    • Your work will directly impact thousands of apartment communities and millions of renters
    • You'll receive significant equity in a venture-backed company with strong traction
    • As we scale, your role and influence will grow with the company

Why You Might Want to Think Twice

    • This is a demanding role that will often require extended hours and deep commitment
    • As a founding team member, you'll need to wear multiple hats and step outside your comfort zone
    • You'll need to make thoughtful tradeoffs between innovation and immediate needs
    • You'll interact directly with customers to understand their needs and occasionally travel to their offices
    • We're a startup - priorities can shift rapidly as we respond to market opportunities and customer needs
    • If you're not comfortable getting your hands dirty with legacy code or speaking directly with customers, this isn't the job for you

Responsibilities

    • Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation
    • Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands
    • Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products
    • Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability
    • Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform
    • Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions
    • Mentor engineers and elevate the team's capabilities across infrastructure, scalability, and AI product development

Your Experience Looks Like

    • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field
    • 5+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability
    • Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services
    • Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)
    • Hands-on experience with database design, performance tuning, and scaling high-throughput data systems
    • Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices
    • Strong communication skills and ability to work effectively in a distributed, fast-paced environment
    • Comfortable operating in early-stage, high-ownership environments with evolving requirements
    • Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure
    • Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows

Guiding Principles

      Customer‑First Outcomes
      Every commit should trace back to resident or operator value. Whether it’s a new feature, infra investment, or AI capability, if it doesn’t solve a real problem, it doesn’t ship.
      Bias for Simplicity
      We favor composable primitives over clever abstractions. Open standards, clean APIs, and clear contracts win over custom complexity, even if the custom version is cooler.
      Quality Is a Gate, Not an After‑Thought
      Quality is built-in from day one. Our definition of done includes: test coverage, performance checks, basic observability, and internal docs. Shipping fast doesn’t mean skipping craftsmanship.
      Data‑Driven Choices
We use data to guide, not paralyze, our decision-making. We track leading indicators (cycle time, defect rate, NPS) and lagging signals (retention, revenue impact). We keep instrumentation lightweight but meaningful: signal over spreadsheets.
      Transparency & Written Culture
Good ideas don't expire in Zoom. We operate in public inside the company: TDDs, PR reviews, and Linear tickets tell the story. This keeps us async-friendly, auditable, and aligned across time zones and functions.

Other Benefits

    • Great health insurance, dental, and vision.
    • Gym and workspace stipends.
    • Computer and workspace enhancements.
    • Unlimited PTO.
    • Company off-sites with the team.
    • Opportunity to play a critical role in building the foundations of the company and Engineering culture.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
