Auto-apply to these DevOps jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

Domyn
New York, NY
We are looking for a talented DevOps Engineer to join our US team and help power the next generation of enterprise AI. As our company delivers full-stack AI applications to some of the world’s leading financial institutions, your work will be critical in ensuring these solutions run flawlessly in the most demanding environments. In this role, you’ll provide technical leadership and hands-on expertise to keep our infrastructure secure, reliable, and high-performing. You’ll be responsible for optimizing deployments, identifying improvements, and creating and executing upgrade strategies; a proven ability to deliver solutions that meet enterprise-grade requirements will be essential. You’ll collaborate closely with our engineering, product, and customer teams to streamline continuous delivery of software and AI applications. As we scale deployments across Google Cloud Platform, Microsoft Azure, Amazon Web Services, and on-premises environments, you’ll play a central role in maintaining, optimizing, and supporting the systems that make it all possible. This is an opportunity to work at the forefront of AI adoption in financial services, solving complex challenges and shaping the infrastructure behind innovative enterprise solutions.

Responsibilities:
- Implement and manage DevOps tools, processes, and infrastructure.
- Collaborate with development and operations teams to design and implement continuous integration and deployment pipelines.
- Interact and collaborate with clients to provide technical support and guidance and ensure their satisfaction.
- Automate infrastructure provisioning, configuration, and monitoring.
- Ensure high availability and performance of our software systems.
- Identify bottlenecks and implement solutions to improve system performance.
- Monitor and troubleshoot production issues and provide timely resolution.
- Maintain and enhance system security and data protection measures.
- Stay up to date with industry best practices and emerging technologies for DevOps.
Requirements

What You Have
- At least 6 years of experience as a Senior DevOps Engineer or in a similar role
- Excellent knowledge of cloud services (GCP/Azure/AWS)
- Experience deploying to on-prem hardware
- Experience with containerization technologies such as Docker and Kubernetes
- Experience with version control systems such as Git
- Strong skills in defining and implementing network architecture, IAM policies, HA setups, and backups
- Expertise with Kubernetes and with any Continuous Delivery platform
- Proficiency in Linux and Bash
- Strong experience in the setup and management of cloud architectures through Infrastructure as Code (Terraform)
- Experience with Postgres and monitoring systems (Datadog, Elastic Cloud, Dynatrace)
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Ability to multitask and prioritize tasks in a fast-paced environment
- Fluency in English

What Would Be Nice To Have
- Good knowledge of programming languages such as Java, Python, and JavaScript
- GCP/Azure/AWS certifications
- Previous experience in the financial services sector

Who You Are
- Passionate about the digital world
- A problem solver
- Collaborative in mindset
- Versatile with new domains and the latest technologies

Benefits & Perks
Domyn offers a competitive compensation structure, including salary, performance-based bonuses, and additional components based on experience. All roles include comprehensive benefits as part of the total compensation package.

About Domyn
Domyn is a company specializing in the research and development of Responsible AI for regulated industries, including financial services, government, and heavy industry. It supports enterprises with proprietary, fully governable solutions based on a composable AI architecture — including LLMs, AI agents, and one of the world’s largest supercomputers.
At the core of Domyn’s product offering is a chip-to-frontend architecture that allows organizations to control the entire AI stack — from hardware to application — ensuring isolation, security, and governance throughout the AI lifecycle. Its foundational LLMs, Domyn Large and Domyn Small, are designed for advanced reasoning and optimized to understand each business’s specific language, logic, and context. Provided under an open-enterprise license, these models can be fully transferred and owned by clients. Once deployed, they enable customizable agents that operate on proprietary data to solve complex, domain-specific problems. All solutions are managed via a unified platform with native tools for access management, traceability, and security. Powering it all, Colosseum — a supercomputer in development using NVIDIA Grace Blackwell Superchips — will train next-gen models exceeding 1T parameters. Domyn partners with Microsoft, NVIDIA, and G42. Clients include Allianz, Intesa Sanpaolo, and Fincantieri. Please review our Privacy Policy here: https://bit.ly/2XAy1gj

Posted 30+ days ago

Keeper Security, Inc.
El Dorado Hills, CA
Keeper Security is hiring a Senior DevOps Program Manager to lead complex, cross-functional initiatives that shape and advance Keeper’s global cloud infrastructure, automation capabilities, and compliance posture. This role is ideal for a hands-on technical program leader who excels at bridging engineering execution with strategic planning. Keeper’s cybersecurity software is trusted by millions of people and thousands of organizations, globally. Keeper is published in 21 languages and is sold in over 120 countries. Join one of the fastest-growing cybersecurity companies and bring your IL5 DevOps expertise to mission-critical work.

About Keeper
Keeper Security is transforming cybersecurity for organizations globally with zero-trust privileged access management built with end-to-end encryption. Keeper’s cybersecurity solutions are FedRAMP and StateRAMP Authorized, SOC 2 compliant, FIPS 140-2 validated, as well as ISO 27001, 27017 and 27018 certified. Keeper deploys in minutes, not months, and seamlessly integrates with any tech stack to prevent breaches, reduce help desk costs and ensure compliance. Trusted by millions of individuals and thousands of organizations, Keeper is the leader for password, passkey and secrets management, privileged access, secure remote access and encrypted messaging. Learn how our zero-trust and zero-knowledge solutions defend against cyber threats at KeeperSecurity.com.

About the Job
The Senior DevOps Program Manager will orchestrate large-scale cloud, automation, and infrastructure initiatives that span Engineering, Security, and Operations. This role is responsible for driving the execution of Keeper’s infrastructure roadmap, enabling modern DevOps practices, and ensuring that all programs align with zero-trust, zero-knowledge, and compliance-driven requirements. This position requires a blend of deep technical understanding and strong program leadership.
You will manage cross-team dependencies, define program priorities, enhance CI/CD and automation workflows, strengthen observability practices, and ensure alignment with compliance frameworks including FedRAMP, SOC 2, and ISO 27001.

Responsibilities
- Lead end-to-end execution of complex DevOps and infrastructure programs, including cloud modernization, CI/CD optimization, automation, and security integrations
- Partner with Engineering, Security, Compliance, and Product leadership to define program strategy, priorities, and success criteria
- Oversee large-scale cloud initiatives across AWS and other platforms, ensuring scalability, cost efficiency, and operational resilience
- Coordinate Infrastructure-as-Code (IaC) initiatives using Terraform and related automation tooling
- Drive improvements across CI/CD pipelines (GitHub Actions, Jenkins, etc.) to reduce deployment friction and enhance reliability
- Champion best practices in automated testing, security scanning, and release governance
- Integrate compliance and security-by-design principles into all DevOps programs, ensuring alignment with FedRAMP, SOC 2, ISO 27001, and similar standards
- Collaborate closely with security engineering and the CISO to ensure program-level compliance and audit readiness
- Oversee observability, SRE, and monitoring initiatives to enhance system visibility, performance, and incident response
- Define SLIs, SLOs, and error budgets in partnership with Engineering and Security teams
- Serve as a cross-functional liaison, ensuring consistent communication, dependency tracking, and alignment across teams
- Manage program timelines, risks, and stakeholder expectations across multiple initiatives
- Work with Agile, Waterfall, or hybrid methodologies to ensure effective delivery depending on program needs
- Identify and adopt emerging technologies that strengthen Keeper’s cloud, automation, and monitoring capabilities

Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related technical field (Master’s preferred)
- 8+ years of experience in DevOps, Cloud Infrastructure, or Site Reliability Engineering
- 4+ years of technical program or project management experience in a SaaS, cloud-native, or cybersecurity environment
- Proven experience with AWS cloud services (ECS, EKS, Lambda, RDS, S3, IAM, etc.)
- Hands-on familiarity with Terraform, Docker, Kubernetes, and CI/CD tooling such as GitHub Actions or Jenkins
- Strong understanding of automation, cloud security, and modern DevOps best practices
- Demonstrated success managing cross-functional initiatives in high-security or compliance-driven environments
- Experience with observability platforms (Datadog, Prometheus, Grafana) and incident management workflows
- Excellent communication skills with the ability to guide stakeholders, influence decisions, and manage expectations

Preferred Qualifications
- AWS Certified DevOps Engineer or similar cloud certifications
- Experience working within FedRAMP, DoD, or regulated government environments
- Exposure to secrets management or privileged access management systems
- Experience with DevSecOps automation and compliance-as-code tooling
- PMP certification

Benefits
- Medical, Dental & Vision (inclusive of domestic partnerships)
- Employer Paid Life Insurance & Employee/Spouse/Child Supplemental Life
- Voluntary Short/Long Term Disability Insurance
- 401K (Roth/Traditional)
- A generous PTO plan that celebrates your commitment and seniority (including paid Bereavement/Jury Duty, etc.)
- Above-market annual bonuses

Keeper Security, Inc. is an equal opportunity employer and participant in the U.S. Federal E-Verify program. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Classification: Exempt

Posted 30+ days ago

Newcode.ai
New York, NY
Who are we?
At Newcode.ai, we’re on a mission to reshape how organizations put AI to work in their day-to-day operations. In a few months, we’ve moved from vision to reality—building products our clients truly love. As part of our fast-growing and highly ambitious team, you won't just drive the future of AI—you’ll help define it.

Who are we looking for?
As a Senior DevOps Engineer at Newcode.ai, you’ll architect and maintain the core infrastructure that powers our cutting-edge AI solutions. You will work hands-on with modern cloud technologies—such as Terraform, Kubernetes (AKS), Helm, and Azure Cloud—designing robust CI/CD pipelines with GitHub Actions and enabling secure, scalable deployments of microservices and multi-tenant architectures. You’ll play a pivotal role in shaping our engineering processes and system architecture from the ground up. As a key member of our fast-paced, collaborative team, you’ll enjoy the autonomy to drive best practices, influence critical decisions, and deliver high-impact solutions from day one. If you’re passionate about infrastructure-as-code, automation, and performance at scale, you’ll thrive here.

Requirements

What You’ll Do
- Design, build, and maintain scalable infrastructure: Build and manage cloud environments and continuous deployment pipelines to support our AI-powered applications.
- Collaborate across teams: Work closely with software engineers and product managers to ensure seamless, robust, and secure deployments.
- Champion best practices: Take ownership of system reliability, automation, security, and infrastructure architecture.
- Drive innovation: Quickly experiment with new tools, optimize workflows, and help evolve our DevOps capabilities as our platform grows.

Who You Are
- Seasoned DevOps Expert: 7+ years of experience architecting, deploying, and maintaining production environments for web applications.
- Technical Depth:
  - 5+ years with Terraform
  - 5+ years with Kubernetes (AKS)
  - 3+ years with Azure Cloud
  - 3+ years with Helm
  - 2+ years with CI/CD pipelines (preferably GitHub Actions)
  - Proven experience setting up microservices and multi-tenant architectures
- Startup Mindset: Thrive in fast-paced, dynamic environments and are comfortable wearing multiple hats.
- AI-Curious: Excited to learn and support cutting-edge AI solutions and related technologies.

Benefits

Why Newcode.ai?
- Join a collaborative, high-energy team where your ideas are heard and your impact is real.
- Help design, build, and launch products shaping the future of artificial intelligence.
- Work flexibly, in English, from anywhere in the EU.

At Newcode.ai, you don’t just see the future—you help create it. Ready to shape what’s next? Apply today and join us in building a smarter, more efficient world powered by AI.

Posted 30+ days ago

Axiom Software Solutions Limited
Frisco, TX
Role: DevOps Engineer
Location: Frisco, TX – Onsite
Position Type: Contract

Job Description:
We are seeking a highly skilled and motivated DevOps Engineer to join our growing team. The ideal candidate will have strong problem-solving abilities, proficiency in Infrastructure as Code (IaC) using Terraform and Ansible, and experience in automating GitLab pipelines. This role requires a deep understanding of AWS and Azure cloud services, as well as expertise in shell and Python scripting.

Key Responsibilities:
• Design, Develop, and Implement IaC: Create and maintain Infrastructure as Code using Terraform and Ansible to ensure efficient and reliable deployment of resources.
• Pipeline Automation: Develop and manage GitLab CI/CD pipelines to automate the build, test, and deployment processes, ensuring high-quality software delivery.
• Cloud Management: Architect, deploy, and manage scalable, secure, and highly available infrastructure on AWS and Azure.
• Scripting and Automation: Write and maintain shell and Python scripts to automate routine tasks, improve system efficiency, and support operational processes.
• Compliance Assurance: Ensure that infrastructure and deployments comply with industry standards and regulations, implementing necessary controls and documentation to maintain compliance.
• Collaboration and Support: Work closely with development, QA, and operations teams to troubleshoot issues, optimize performance, and ensure seamless integration and deployment.
• Monitoring and Optimization: Implement and maintain monitoring solutions to ensure system health and performance, and proactively address potential issues.
• Documentation and Best Practices: Document processes, configurations, and procedures, and promote best practices in infrastructure and deployment management.
• Continuous Improvement: Stay updated with industry trends, tools, and technologies to continuously improve DevOps practices and infrastructure.
Required Skills and Qualifications:
• Problem-Solving Skills: Strong analytical and problem-solving skills to identify, diagnose, and resolve technical issues efficiently.
• IaC Proficiency: Hands-on experience with Infrastructure as Code tools such as Terraform and Ansible.
• CI/CD Expertise: Extensive experience with GitLab CI/CD pipeline automation.
• Cloud Platforms: Proficient in managing AWS and Azure cloud environments, including services like EC2, S3, RDS, Azure VMs, Azure Blob Storage, etc.
• Scripting Languages: Proficiency in shell scripting and Python for automation and system management tasks.
• Self-Motivated and Collaborative: Highly self-motivated with a strong collaborative mindset to work effectively in a team-oriented environment.
• Communication Skills: Excellent verbal and written communication skills to articulate technical concepts and solutions clearly.
• Depending on the work environment, the subject matter expert may lead or be an active participant of a work group with the need for specialized knowledge.
• Meet all agreed-upon turnaround times for deliverables, deliverable reviews, and deliverable sign-off.
• Understands, articulates, and implements best practices related to their area of expertise.
• Provides guidance on how their area of capability can resolve an organizational need and actively participates in all phases of the solution life cycle.
• Designs solutions and best practices to meet client objectives.
• Works with clients to identify business challenges and contributes to client deliverables by refining, analyzing, and structuring relevant data.

Posted 30+ days ago

Resonance
New York, NY
About Us
Resonance is a technology company building a more sustainable and valuable fashion industry for designers, brands, manufacturers, consumers, and the planet. The company’s AI-powered operating system, ONE, enables brands to design, sell, and make in that order – empowering designers to operate with no unnecessary inventory and eliminating the financial and environmental burdens of the legacy fashion industry. Resonance ONE is our end-to-end platform that powers every aspect of an apparel brand’s business, constantly learning and optimizing how garments are designed, sold, and made. Headquartered in New York City and Santiago, Dominican Republic, Resonance has partnered with more than 30 brands – including THE KIT and Rebecca Minkoff – to create garments that use 97% less dye, 70% less water, and 50% less material than any other fashion brand — and immediately eliminate overproduction. Want to know more? Visit our website and read articles about us.

About the Role
We’re looking for a talented DevOps Engineer to join our remote team and help scale the sophisticated infrastructure behind Resonance ONE. As a DevOps Engineer at Resonance, you will play a critical role in designing, building, and maintaining a complex full-stack platform that underpins everything from digital design tools to e-commerce and manufacturing automation. Our stack spans a wide range of modern technologies – from machine learning services (OpenAI and other ML models) to a robust cloud backend (AWS infrastructure, AWS Lambda), data and analytics systems (Hasura GraphQL engine, Snowflake data warehouse, Looker BI), event streaming (Kafka), and orchestration tools (Kubernetes with Argo Workflows, plus integrations with tools like Airtable) – all working in concert to realize our mission. In this role, you will ensure these diverse components work together in harmony, securely and at scale.
You’ll have the opportunity to shape and implement scalable DevOps practices and systems from the ground up in a forward-thinking, AI-driven organization. You will collaborate closely with software engineers, data scientists, and product teams to continuously improve our development pipeline, deployment processes, and infrastructure automation. This is a unique chance to tackle challenging problems in an architecture that pushes the boundaries of technology – all while enabling fashion brands to innovate without waste.

Responsibilities
- Architect and Maintain Cloud Infrastructure: Build, maintain, and scale our AWS cloud infrastructure using infrastructure-as-code and modern CI/CD pipelines (e.g. Argo Workflows). Ensure reliable, automated deployments of our applications and machine learning services across development, staging, and production environments.
- Container Orchestration: Manage our Kubernetes clusters and containerized microservices, optimizing for high availability, security, and efficient resource usage. Continuously improve our cluster deployment, scaling strategies, and rollback processes to support a rapidly growing platform.
- CI/CD & Automation: Design and implement continuous integration and delivery pipelines that empower our development team to ship code and ML model updates quickly and safely. Automate routine operations and workflows, reducing manual work through scripts, AWS Lambda functions, and other automation tools.
- Monitoring & Reliability: Implement robust monitoring, logging, and alerting (using tools like Prometheus, CloudWatch, etc.) to proactively track system performance and reliability. Quickly troubleshoot and resolve infrastructure issues or bottlenecks across the stack to maintain high uptime and responsive services.
- Data & Pipeline Integration: Work closely with our data engineering team to support a seamless flow of data through the platform. Maintain and optimize our event streaming and pipeline architecture (Kafka) and its integration with downstream systems like our Snowflake data warehouse and Looker analytics, ensuring data is delivered accurately and on time.
- AI/ML Infrastructure: Collaborate with machine learning engineers to deploy and scale AI/ML models in production. Support the integration of OpenAI and other ML models into our applications, implementing the infrastructure (compute, storage, containers) needed for model training, inference, and monitoring model performance in a live environment.
- Tool Integration & Support: Integrate and manage internal and third-party tools that extend our platform’s functionality – for example, maintaining our Hasura GraphQL engine that interfaces with databases, or automating workflows involving external services like Airtable. Ensure these tools are properly deployed, updated, and aligned with our security and compliance standards.
- DevOps Best Practices & Culture: Champion DevOps best practices across the engineering organization. This includes improving our release processes (e.g. implementing GitOps workflows), optimizing build/test pipelines, and mentoring developers on using infrastructure tools. You will continually evaluate new technologies and processes to enhance deployment speed, reliability, and scalability, while balancing rapid iteration with operational stability.

Requirements

Minimum Requirements
- Experience: 5+ years of experience in DevOps, SRE, or related infrastructure engineering roles, with a track record of managing complex, distributed systems at scale.
- Cloud Proficiency: Strong expertise in AWS and cloud architecture (compute, storage, networking, and security). You have designed and maintained scalable infrastructure using services like EC2/ECS/EKS, S3, RDS, VPC, and Lambda, and you understand how to build secure and cost-efficient cloud environments.
- Containers & Orchestration: Hands-on experience with containerization and orchestration – you have managed production Kubernetes clusters (or similar orchestration platforms), and you’re comfortable with Docker and container lifecycle management.
- CI/CD & Automation: Proven ability to create and manage CI/CD pipelines using tools such as Jenkins, CircleCI, GitHub Actions, or Argo. You automate workflows wherever possible and have experience implementing GitOps or similar practices to streamline deployments.
- Infrastructure as Code: Proficiency in scripting and infrastructure-as-code (Terraform, CloudFormation, or equivalent). You can manage infrastructure configuration in a reproducible way and have experience automating cloud resource provisioning.
- Monitoring & Troubleshooting: Solid knowledge of monitoring and logging frameworks (e.g. Prometheus, Grafana, ELK stack, CloudWatch) and experience setting up alerts and dashboards. You excel at diagnosing issues across the full stack – from network and infrastructure to application logs – and ensuring high reliability.
- Data Pipeline Familiarity: Familiarity with event-driven architecture and data pipelines. You have worked with messaging or streaming systems (e.g. Kafka, Kinesis) and understand how to connect various data stores and services (relational and NoSQL databases, data warehouses like Snowflake) in a production environment.
- Security Mindset: Good understanding of security best practices in cloud and DevOps (managing secrets, IAM roles, VPC security, etc.). You are vigilant about maintaining compliance and protecting sensitive data across all systems.
- Collaboration & Communication: Excellent communication skills and a collaborative attitude. You can work effectively on a remote, cross-functional team, partnering with software engineers, data scientists, product managers, and QA to achieve common goals.
- Adaptability: Self-driven and adaptable to change. You thrive in fast-paced, ambiguous environments and take ownership of delivering results. You prefer simple, elegant solutions and have a knack for prioritizing what will scale and add value, in line with our mission to deliver results and delight our users.

Preferred Qualifications
- Startup / 0→1 Experience: Experience working in a startup or building systems from scratch. You’re comfortable with the scrappiness and ingenuity required to design new infrastructure and processes in a rapidly evolving environment.
- MLOps & AI Services: Exposure to MLOps or AI-driven platforms. Experience deploying or managing machine learning models in production, or familiarity with ML frameworks and services (e.g. handling model serving, working with OpenAI or similar AI APIs) is a strong plus.
- Data & Analytics Tools: Experience with data warehousing and analytics tools – for example, deploying or maintaining Snowflake, or integrating BI platforms like Looker into a data pipeline. Understanding of how to optimize data flows and query performance in such systems is a plus.
- GraphQL / Hasura: Familiarity with GraphQL APIs and frameworks (especially Hasura). You understand how GraphQL layers interface with backend databases and can optimize or troubleshoot in such an environment.
- Orchestration & Serverless: Experience with workflow orchestration tools like Argo Workflows (or similar, e.g. Airflow, Tekton) for running complex jobs/pipelines. Experience managing serverless functions (AWS Lambda) as part of a larger system is also beneficial.
- Domain Interest: A passion for our mission of sustainability and transforming the fashion industry. Interest or experience in e-commerce, manufacturing processes, or fashion technology is a plus – you enjoy applying technology to solve real-world problems in new domains.

Benefits
- Compensation & Benefits: We offer full benefits (medical, dental, and vision) and a competitive salary, along with equity participation. You’ll be joining a passionate team with a shared mission and ample opportunities for growth.
- Remote Work: This is a fully remote position. We embrace a remote-first culture that allows you to work from anywhere, while staying closely connected with a diverse, global team. Periodic travel to our NYC or Dominican Republic hubs for team gatherings is occasional and optional.
- Mission-Driven Culture: Work on something meaningful – every feature you help ship and every system you optimize contributes to eliminating waste in the fashion industry and driving sustainable innovation. We foster a creative, inclusive environment where new ideas are encouraged.
- Equal Opportunity Employer: Resonance Companies is an equal opportunity employer and values diversity in our company. We do not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other status protected by applicable law. All employment decisions are based on qualifications, merit, and business need.

Posted 30+ days ago

Flosum
San Ramon, CA
About Flosum
Flosum is the #1 Salesforce-native DevSecOps and Data Protection platform, purpose-built to meet the rigorous demands of enterprise Salesforce environments. Our solution suite—spanning DevOps, Backup & Archive, and Security—enables Salesforce teams to ship faster, stay compliant, and protect critical data with confidence. We’re growing fast, and we're looking for a Mid-Market Account Executive to join our Bay Area team and take ownership of selling our Backup and Archive platform to Salesforce customers across North America.

About the Role
As an Inside Sales Representative, you will be responsible for the full sales cycle—from prospecting and discovery to demo, negotiation, and close. You’ll work closely with Solution Engineers and Sales Leadership to develop pipeline and guide prospects through a high-volume, technical buying process. This role requires someone who is technical, self-driven, and thrives in ambiguity, with a passion for helping customers solve real business challenges through data protection and automation.

Requirements

What You’ll Be Doing
- Own a monthly and quarterly quota focused on Flosum’s Backup & Archive product
- Run a full-cycle sales motion: discovery, product demo, value alignment, negotiation, and close
- Deliver live product demos and confidently articulate technical value to developers, architects, and compliance stakeholders
- Understand technical buyer needs around APIs, metadata, storage limits, archival workflows, field history tracking, and compliance frameworks (SOX, HIPAA, etc.)
- Collaborate cross-functionally with Product, Customer Success, and Engineering to ensure smooth customer handoffs and expansion opportunities
- Maintain up-to-date records in Salesforce and contribute to forecast accuracy
- Represent Flosum at select industry events and meet-ups (up to 10% travel)

Who You Are
- 2–3+ years of closing experience in B2B SaaS sales (ideally mid-market or enterprise)
- Familiarity with data protection, DevOps, IT infrastructure, or security solutions is a big plus
- Experience selling to technical buyers—developers, architects, IT, and InfoSec
- Comfortable discussing API integrations, data storage strategies, and compliance requirements
- Strong live demo and discovery skills
- Highly organized, self-managed, and able to prioritize competing demands
- Internally motivated by curiosity, learning, and continuous improvement
- Humble, collaborative, and emotionally intelligent
- Located in the San Francisco Bay Area and available to work on-site regularly

Bonus Points For
- Salesforce ecosystem experience (partner, ISV, or consulting background)
- Certification in a modern sales methodology (e.g., MEDDIC, Challenger, Sandler)
- A history of exceeding quota and hitting accelerators
- Experience with tools like Salesforce, Outreach, Gong, ZoomInfo, and LinkedIn Sales Navigator

Benefits
- Sell into a fast-growing, high-demand category: Salesforce data protection & DevOps
- Work with a product customers love and a team that wins together
- Fast-track opportunity into enterprise sales or leadership
- Competitive compensation, equity, and uncapped commission
- Clear, performance-based career path

Posted 30+ days ago

OnMed
White Plains, NY
Who We Are and Why Join Us At OnMed our purpose is simple but powerful...to improve the quality of life and sense of well-being in our communities by bringing access to healthcare to everyone, everywhere. Our path to everywhere has already begun, with our innovative CareStation, a small but mighty, Clinic-in-a-Box, bringing #healthcareaccess anywhere with an outlet to plug it in. Poised to become a key component in America’s public health infrastructure, the OnMed CareStation is the only tech-enabled, human-led, hybrid care solution that combines the comprehensive experience, trust and outcomes of a clinic, with the rapid scalability of virtual care. At OnMed, every role, everyday, is directly impacting the communities we serve. You’ll join a high-performing purpose-driven team, innovating to break down the barriers that keep people from the care they need. This is not just a job...it's a movement to bring access to healthcare where and when people need it most. It’s healthcare that shows up. Who You Are As a DevOps Engineer at OnMed, you will play a pivotal role in shaping the future of our CareStation applications - streamlining our processes, instituting automation, improving our software delivery speed and quality and enhancing collaboration across engineering, development, IT and Security to help build pipelines that are scalable, secure and robust. Role Responsibilities Infrastructure & Automation Design, build, and maintain scalable and secure cloud infrastructure and automated CI/CD pipelines for building, testing, and deploying applications. Develop automation solutions using Ansible or equivalent configuration management tools (e.g., Chef, Puppet) and scripting languages (Bash, Python) to configure and manage bare-metal and Azure cloud resources Manage and maintain source code repositories. Systems Reliability & Performance Monitor system performance, uptime, reliability, and capacity. 
Troubleshoot deployment and production issues, conduct root cause analysis, and drive incident response and postmortem processes. Implement observability best practices (logging, metrics, tracing) to identify and resolve performance bottlenecks. Own the build and deployment process for services and applications across development, staging, and production environments. Improve release reliability through automated testing, configuration management, and continuous integration strategies. Security & Compliance Work closely with application development teams to integrate security best practices into the development and deployment pipeline to ensure compliance and protect applications and data. Integrate security best practices into DevOps workflows, including vulnerability scanning, secrets management, and compliance automation. Create and maintain technical documentation for infrastructure, processes, and configurations. Requirements Knowledge, Skills & Abilities Deep knowledge of Azure cloud services and how to manage, configure, and deploy resources within the platform. Proficiency with configuration management tools like Ansible, Chef, and Puppet. Experience with containerization technologies like Docker, orchestration tools such as Kubernetes, and build management systems like Bazel, or language-specific systems like Maven for Java, Cargo for Rust, and CMake for C/C++. Deep knowledge of IaC tools such as Terraform to automate configuration and deployment of environments. Proficiency in scripting languages like Bash, PowerShell, or Python for automation. Experience with Azure DevOps and GitHub Actions. Understanding of GitOps, GitFlow, branching, tagging, and release management. Knowledge of PowerShell, Azure CLI, Azure Bicep, or YAML is a plus. Knowledge of relational databases, application security best practices, and modern DevOps workflows is critical. Familiarity with Angular, TypeScript, JavaScript, C#, and .NET environments. Exceptional problem-solving skills.
Ability to work independently as well as collaboratively. Must be able to work and thrive in a fast-paced, dynamic environment. Education & Experience Bachelor's degree in Computer Science or equivalent; Master’s degree preferred. 5+ years of relevant work experience as a DevOps Engineer with exposure to IT management and Security. Benefits OnMed provides a competitive salary and benefits package, including unlimited PTO and paid holidays. The base salary for this role is up to $150,000, commensurate with the candidate's experience, plus an annual discretionary performance bonus. OnMed is a proud equal opportunity employer. All qualified applicants will be considered without regard to race, color, creed, religion, gender, sexual orientation, national origin, genetic information, disability, age, marital status, veteran status, or any other category protected by law. #LI-HYBRID
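The observability responsibilities in the OnMed posting (logging, metrics, tracing) can be illustrated with a minimal structured-logging sketch in Python; the logger name and the log fields below are hypothetical, not part of any OnMed system:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line (hypothetical minimal setup)."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

logger = logging.getLogger("carestation")  # hypothetical service name
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("deployment finished")
```

Emitting one JSON object per log line is what makes downstream aggregation (for example, in Azure Monitor or an ELK stack) straightforward to query.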

Posted 2 weeks ago

iSoftTek Solutions Inc - Chicago, IL
DevOps Consultant
Location: Chicago, IL (Onsite)
Duration: 06 months
Years: 12

Must haves:
• Retail / Food services industry experience
• Mobile application experience (iOS / Android)
• Client management skills – not just individual contributor DevOps experience
• Specific release management experience in addition to regular DevOps, including working with stakeholders at all levels in the business

Job Description: Software Development Cycle Focus
• Primary Focus: End of the development cycle, specifically release engineering.

Key Areas:
• Build process automation
• Continuous integration and continuous delivery (CI/CD)
• Release preparation and automation
• Post-release monitoring and issue resolution
• Collaborating with DevOps for infrastructure

Tasks and Responsibilities
• Automate the entire release preparation cycle to eliminate manual testing and release prep.
• Ensure reliable, reproducible builds and consistent build results.
• Manage and enhance CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar.
• Implement and maintain automated testing frameworks using Selenium, Appium, etc.
• Collaborate with development, QA, and DevOps teams to identify and resolve release issues.
• Monitor post-release metrics, analytics, and feedback to identify and resolve issues.
• Create and maintain dashboards and monitoring tools for release performance and issues.
• Stay up-to-date with emerging trends and technologies in release engineering and automation.

Business Value Contribution
• Efficiency: Reduce the two-week release preparation time, accelerating the development cycle.
• Reliability: Ensure higher quality releases with fewer post-release issues.
• Scalability: Enhance the ability to scale release processes across multiple environments and teams.
• Innovation: Introduce new tools and methodologies to stay ahead in the automation and release engineering space.

Metrics for Accountability
• Reduction in release preparation time.
• Number of successful automated builds and deployments.
• Decrease in post-release issues and bugs.
• Improvement in release cycle time and frequency.
• User feedback and satisfaction related to release quality.

Critical Background Experience
• Strong DevOps or release engineering experience (4+ years).
• Hands-on experience with CI/CD practices and tools.
• Proficiency in automated testing tools (Selenium, Appium).
• Familiarity with cloud-based platforms (AWS, GCP).
• Experience with both manual and automated testing, preferably in the mobile domain.
• Strong analytical, troubleshooting, and problem-resolution skills.
• Engineering manager mindset with recent hands-on engineering experience.

Role Growth and Thought Leadership
Initial Growth:
• Master the existing Client mobile ecosystem and release processes.
• Optimize and automate current release workflows.
Long-term Growth:
• Lead the implementation of cutting-edge release automation technologies.
• Mentor and guide junior engineers in best practices for release engineering.
• Represent the organization at industry conferences and forums.
Creating Influence:
• Publish articles and case studies on successful automation projects.
• Participate in and contribute to industry groups and standards.
• Host internal workshops and training sessions on release engineering and automation.
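The release-cycle metrics this consultant role is held accountable for reduce to simple arithmetic over release timestamps. A toy Python sketch (the dates below are made up for illustration):

```python
from datetime import datetime
from statistics import mean

def release_cycle_days(release_dates):
    """Average number of days between consecutive releases,
    a simple release-frequency metric."""
    gaps = [(later - earlier).days
            for earlier, later in zip(release_dates, release_dates[1:])]
    return mean(gaps)

# Hypothetical release history: three releases, two weeks apart.
dates = [datetime(2024, 1, 1), datetime(2024, 1, 15), datetime(2024, 1, 29)]
print(release_cycle_days(dates))  # average gap of 14 days
```

Tracking this number over time is one way to show the "improvement in release cycle time and frequency" the posting asks for.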

Posted 30+ days ago

Avalore, LLC - Annapolis Junction, MD
Supports a team of developers implementing multiple workflow products in a customer portfolio. Collaborates with system, software, and UI/UX engineers to design, develop, deploy, operate, and manage workflows hosted on a large-scale, enterprise application built on the Pega Platform. Collaborates with system, DevOps, and UI/UX engineers to ensure workflows are security compliant, accessibility compliant, and meet minimum performance requirements. Requirements Bachelor’s degree in Computer Science or related technical degree from an accredited college or university. Four (4) years of additional experience may be substituted for a Bachelor’s degree; At least nine (9) years of software development experience. MUST have experience with Pega development. Clearance: Active TS/SCI with an appropriate current polygraph is required to be considered for this role; Ability to receive privileged access rights. Benefits Eligibility requirements apply. Employer-Paid Health Care Plan (Medical, Dental & Vision) Retirement Plan (401k, IRA) with a generous matching program Life Insurance (Basic, Voluntary & AD&D) Paid Time Off (Vacation, Sick & Public Holidays) Short Term & Long Term Disability Training & Development Employee Assistance Program

Posted 30+ days ago

Axiom Software Solutions Limited - Jersey City, NJ
The Senior Technical Lead will be responsible for overseeing and leading the technical aspects of DevOps solutions, AWS architecture, and Java within the organization. They will play a key role in designing, implementing, and optimizing technical solutions to meet business requirements effectively. Key Responsibilities 1. Lead the design, development, and implementation of DevOps solutions, AWS architecture, and Java applications. 2. Collaborate with cross-functional teams to ensure seamless integration of DevOps practices and AWS services. 3. Provide technical guidance and mentorship to team members on best practices for DevOps and AWS architecture. 4. Evaluate existing systems and processes to identify areas for improvement and implement necessary upgrades. 5. Monitor system performance and ensure reliability and scalability of applications in the AWS environment. 6. Troubleshoot technical issues related to DevOps and Java applications, providing timely resolution. 7. Stay updated on industry trends and best practices in DevOps, AWS, and Java development technologies. Skill Requirements 1. Proficiency in DevOps solutions implementation and optimization. 2. Strong knowledge and experience in AWS architecture, including services like EC2, S3, Lambda, and RDS. 3. Expertise in Java programming and development, with the ability to write efficient code and troubleshoot issues. 4. Hands-on experience with CI/CD pipelines, containerization tools like Docker, and automation tools like Ansible. 5. Familiarity with infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. 6. Excellent problem-solving skills and the ability to work in a fast-paced and dynamic environment. 7. Strong communication and leadership skills to effectively collaborate with team members and stakeholders.

Posted 30+ days ago

iSoftTek Solutions Inc - Mountain View, CA
Job Title: Kubernetes DevOps Engineer
Location: Mountain View, CA [Needs to be onsite for 1 week once a quarter at your own expense]
Job Type: W2
Duration: Long Term
Note: Please apply if you are a USC/GC/EAD – Independent Candidates Only
Must have:
· 10+ years of professional experience as a Cloud/Infrastructure Engineer or a similar role, with at least 7+ years on GCP.
· Strong Linux and shell scripting experience.
· Skills in GCP containers as described in the offerings below, especially Kubernetes.
· Skills in GCP application integration/management.
· Experience with Terraform.
· GCP Certified Professional.
· Knowledge of cloud infrastructure resources/concepts such as VPCs, subnets, firewalls, etc., specifically for GCP.
· Experience with distributed systems, Kubernetes, Infrastructure-as-Code (IaC), and cloud security practices.
· Hands-on experience in scripting/programming languages like Python and shell scripting.
· Strong problem-solving and troubleshooting skills.
· Strong written/verbal communication skills with the ability to thrive in a remote work environment.
Kindly share your resumes with srikar@isofttekinc.com or 707-435-3471

Posted 30+ days ago

Newcode.ai - Palo Alto, CA
Who are we? At Newcode.ai, we’re on a mission to reshape how organizations put AI to work in their day-to-day operations. In a few months, we’ve moved from vision to reality—building products our clients truly love. As part of our fast-growing and highly ambitious team, you won't just drive the future of AI—you’ll help define it. Who are we looking for? As a Senior DevOps Engineer at Newcode.ai, you’ll architect and maintain the core infrastructure that powers our cutting-edge AI solutions. You will work hands-on with modern cloud technologies—such as Terraform, Kubernetes (AKS), Helm, and Azure Cloud—designing robust CI/CD pipelines with GitHub Actions and enabling secure, scalable deployments of microservices and multi-tenant architectures. You’ll play a pivotal role in shaping our engineering processes and system architecture from the ground up. As a key member of our fast-paced, collaborative team, you’ll enjoy the autonomy to drive best practices, influence critical decisions, and deliver high-impact solutions from day one. If you’re passionate about infrastructure-as-code, automation, and performance at scale, you’ll thrive here. Requirements What You'll Do Build and maintain end-to-end features: Develop scalable APIs and intuitive user interfaces that delight our users. Work collaboratively: Partner with Frontend and Backend specialists to create seamless, high-performance products. Own your craft: Take responsibility for code quality, architecture, and the overall development lifecycle. Innovate and iterate: Move quickly, experiment boldly, and help shape the evolution of our platform. What You’ll Do Design, build, and maintain scalable infrastructure: Build and manage cloud environments and continuous deployment pipelines to support our AI-powered applications. Collaborate across teams: Work closely with software engineers and product managers to ensure seamless, robust, and secure deployments.
Champion best practices: Take ownership of system reliability, automation, security, and infrastructure architecture. Drive innovation: Quickly experiment with new tools, optimize workflows, and help evolve our DevOps capabilities as our platform grows. Who You Are Seasoned DevOps Expert: 7+ years of experience architecting, deploying, and maintaining production environments for web applications. Technical Depth: 5+ years with Terraform 5+ years with Kubernetes (AKS) 3+ years with Azure Cloud 3+ years with Helm 2+ years with CI/CD pipelines (preferably GitHub Actions) Proven experience setting up microservices and multi-tenant architecture Startup Mindset: Thrive in fast-paced, dynamic environments and are comfortable wearing multiple hats. AI-Curious: Excited to learn and support cutting-edge AI solutions and related technologies. Benefits Why Newcode.ai? Join a collaborative, high-energy team where your ideas are heard and your impact is real. Help design, build, and launch products shaping the future of artificial intelligence. Work flexibly, in English, from anywhere in the EU. At Newcode.ai, you don’t just see the future—you help create it. Ready to shape what’s next? Apply today and join us in building a smarter, more efficient world powered by AI.

Posted 30+ days ago

Aravo Solutions, Inc. - Austin, TX
Aravo Solutions, Inc. is a global leader in third-party risk management, ESG, and vendor lifecycle management solutions. Our cloud-based platform empowers organizations of all sizes, from Fortune 100 to mid-level enterprises, to streamline vendor management processes, mitigate risk, and drive strategic decision-making. We provide guidance globally for the most complex third-party networks in the world, helping them manage risk, achieve compliance, and protect their reputations. Join us at Aravo Solutions, where we are passionate about helping companies eliminate corruption and social injustice from their extended enterprises. You will have the opportunity to work alongside industry experts, leverage the latest technologies, and contribute to shaping the future of vendor management! Position Overview: The Aravo product development team places an emphasis on meaningful contribution by all members in a creative, collaborative, and sustainable environment where opportunities for growth, leadership, and recognition abound. The Engineering Leader will enhance and maintain a highly configurable, multi-tenant, enterprise-class SaaS solution that incorporates cross-organizational collaborative workflows, data integration, and a rich user interface built on a complex data model. This role will work primarily on driving Operational Excellence and improving our team’s Developer Experience through automation, visibility, and continuous improvement. Key Responsibilities: Evolve DevOps practices by creating a culture of change, gathering continuous feedback, and driving efficiency through automation. Manage and improve CI/CD pipelines and Infrastructure as Code (IaC), focusing on engineering efficiencies, quality, and security, to accelerate deployment cycles. Partner with product architects and engineering teams to influence architecture and drive key technology decisions.
Develop and execute a comprehensive strategy to ensure that all infrastructure, systems, and services are scalable, secure, and cost-effective. Drive operational excellence by implementing best practices for observability and incident management, with KPIs to measure effectiveness and drive continuous improvement initiatives. Ensure infrastructure, tools, software, and SaaS vendors adhere to security best practices and compliance requirements. Build, mentor, and support a team of engineers, creating a culture of continuous improvement and innovation that meets business needs. Assist in diagnosing and resolving customer issues, offering support for debugging and remediating issues. Participate in Agile/Scrum practices such as sprint planning, daily stand-ups, and retrospectives, as well as managing project timelines and deliverables effectively. Provide 24/7/365 coverage to support internal and external customers. Requirements Qualifications: Bachelor's degree in Computer Science, Information Technology, or equivalent experience in a related field. 8+ years of DevOps experience, including 3+ years in a leadership role. Adept at designing and architecting solutions in self-hosted, private/public cloud, and hybrid environments (Rackspace, AWS, Azure). Previous experience implementing DevOps best practices, tools, and methodologies, including CI/CD pipelines (TeamCity, GitLab, GitHub Actions, Jenkins) and IaC (Ansible, Terraform, AWS CloudFormation/CDK). Management of SaaS applications (Java, Spring Boot, Oracle), including monitoring, incident management, and triaging issues. Deep understanding of observability tools for monitoring, logging, and analysis (Datadog, Sumo Logic, New Relic, Prometheus, Grafana, ELK/EFK). Knowledge of networking principles and how they apply to data flow and security. Experience debugging complex applications in Linux environments. History of mentoring and growing effective engineering teams.
Worked within Agile and Lean software development teams. Preferred Qualifications/Skills/Soft Skills: Results oriented, product focused, and at ease in an environment requiring the ability to quickly and appropriately prioritize conflicting demands. Strong analytical skills, and excellent verbal and written communications skills. High degree of initiative consistently demonstrated by the active ownership of complex problems through their successful resolution. Team player eager to work closely with, learn from, and mentor others while continually improving self and team. Innately curious about new technologies and their practical application in a startup environment. Flexibility and willingness to pitch in and wear more than one hat in a dynamic organization. Benefits 100% Employer Paid Medical Insurance options for the Employee and Family Paid Maternity and Paternity Leave Life and AD&D Insurance Long-Term Disability Insurance 401K with Company Matching Equity Participation 4 Weeks of Vacation Fully Stocked Kitchens Company-Sponsored Charitable Day of Giving Events ......and many more! Aravo Solutions Inc. is registered as an employer in many, but not all, states. If an applicant is not in or able to work from a state where Aravo Solutions Inc. is registered, they may not be eligible for employment. The eligible states include: FL, GA, MA, MO, NC, NH, NV, OR, PA, SC, TN, and TX.

Posted 2 weeks ago

TheIncLab - McLean, VA
The Mission Starts Here TheIncLab engineers and delivers intelligent digital applications and platforms that revolutionize how our customers and mission-critical teams achieve success. Your Mission, Should You Choose to Accept We’re looking for a Senior DevOps Engineer who is passionate about automation, operational excellence, and building reliable, scalable infrastructure. This role blends core DevOps responsibilities with a strong emphasis on Site Reliability Engineering (SRE), helping ensure system uptime, performance, and observability across environments. The ideal candidate brings hands-on experience managing CI/CD pipelines, cloud infrastructure, and production operations, with a mindset oriented toward reducing toil and driving continuous improvement. What will you do? DevOps Engineering Build, maintain, and improve CI/CD pipelines using GitLab CI/CD or similar tools. Automate infrastructure provisioning, deployment, and maintenance using Terraform, Ansible, or related technologies. Collaborate with developers and QA to create reliable deployment paths from local dev to production. Implement infrastructure-as-code practices across environments (e.g., AWS, Kubernetes, bare-metal). Site Reliability Engineering (SRE) Design and implement monitoring, alerting, and observability systems to maintain high availability and performance. Respond to incidents, lead root cause analysis, and implement preventive measures. Establish and evolve SLOs/SLIs to ensure measurable system reliability. Participate in on-call rotation and help build automation to reduce the need for human intervention. Drive capacity planning, performance tuning, and cost optimization initiatives. System Operations Administer Linux (Ubuntu/Debian) and Windows-based infrastructure. Manage self-hosted GitLab instances and ensure secure, performant operation. Implement and enforce security best practices across infrastructure (IAM, RBAC, least privilege, etc.). 
Support both containerized and virtualized workloads across environments. Requirements Capabilities that will enable your success 5+ years in DevOps, SRE, or Infrastructure Engineering roles. Hands-on experience and proficiency with AWS services (EC2, S3, RDS, VPC, IAM, etc.) and infrastructure automation (Terraform, Ansible, or similar). Experience deploying and managing infrastructure using Terraform and/or Ansible. Solid knowledge of Linux system administration. Strong skills in Windows system administration environments. Proven experience managing and automating GitLab, including CI/CD pipelines. Proficiency in at least one programming or scripting language (Python, Bash, etc.). Experience implementing monitoring, logging, and alerting solutions (CloudWatch, Datadog, CloudTrail). Solid understanding of networking, security best practices, and high-availability system design. Familiarity with version control systems (Git) and GitLab workflows. Strong troubleshooting and incident response skills, with a focus on automation and root cause analysis. Ability to travel up to 20%. Preferred Qualifications AWS certification or equivalent practical experience. Knowledge of cloud cost optimization and efficiency practices. Experience with self-hosted GitLab instances and CI/CD pipelines. Clearance Requirements Applicants must be a U.S. Citizen and willing and eligible to obtain a U.S. Security Clearance at the Secret or Top-Secret level. Existing clearance is preferred. Benefits At TheIncLab we recognize that innovation thrives when employees are provided with ample support and resources. Our benefits packages reflect that: Hybrid and flexible work schedules Professional development programs Training and certification reimbursement Extended and floating holiday schedule Paid time off and Paid volunteer time Health and Wellness Benefits include options for Medical, Dental, and Vision insurance along with access to Wellness, Mental Health, and Employee Assistance Programs.
100% Company Paid Benefits that include STD, LTD, and Basic Life insurance. 401(k) Plan Options with employer matching Incentive bonuses for eligible clearances, performance, and employee referrals. A company culture that values your individual strengths, career goals, and contributions to the team. About TheIncLab Founded in 2015, TheIncLab (“TIL”) is the first human-centered artificial intelligence (AI+X) lab. We engineer complex, integrated solutions that combine cutting-edge AI technologies with emerging systems-of-systems to solve some of the most difficult challenges in the defense and aerospace industries. Our work spans diverse technological landscapes, from rapid ideation and prototyping to deployment. At TIL, we foster a culture of relentless optimism. No problem is too hard, no project is too big, and no challenge is too complex to tackle. This is possible due to the positive attitude of our teams. We approach every problem with a “yes” attitude and focus on results. Our motto, “demo or die,” encompasses the idea that failure is not an option. We do all of this with a work ethic rooted in kindness and professionalism. The positive attitude of our teams is only possible due to the support TIL provides to each individual. At TIL, we believe that every challenge is an opportunity for growth and innovation. Our teams are encouraged to think outside the box and come up with creative solutions to complex problems. We understand that the path to success is not always straightforward, but we are committed to persevering and finding a way forward. Our culture of relentless optimism is not just about having a positive attitude; it is about taking action and making things happen. We believe in the power of collaboration and teamwork, and we know that by working together, we can achieve great things. Our teams are made up of individuals who are passionate about their work and dedicated to making a difference. 
Learn more about TheIncLab and our job opportunities at www.theinclab.com. Salary range guidance provided is not a guarantee of compensation. Offers of employment may be at a salary range that is outside of this range and will be based on qualifications, experience, and possible contractual requirements. This is a direct hire position, and we do not accept resumes from third-party recruiters or agencies.
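The SLO/SLI work described in TheIncLab posting boils down to arithmetic like the following Python sketch; the request counts and the 99.9% target are illustrative only, not TheIncLab's actual targets:

```python
def availability_sli(good, total):
    """Fraction of successful requests: the classic availability SLI."""
    return good / total

def error_budget_remaining(sli, slo):
    """Share of the error budget still unspent, given a target SLO (e.g. 0.999)."""
    allowed = 1.0 - slo   # total error budget permitted by the SLO
    burned = 1.0 - sli    # error rate actually observed
    return max(0.0, 1.0 - burned / allowed)

# Hypothetical month: 999,500 good requests out of 1,000,000 against a 99.9% SLO.
sli = availability_sli(999_500, 1_000_000)
print(error_budget_remaining(sli, slo=0.999))  # roughly half the budget remains
```

When the remaining budget approaches zero, common SRE practice is to slow feature releases in favor of reliability work, which is exactly the trade-off SLOs are meant to make explicit.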

Posted 30+ days ago

TheIncLab - Nashville, TN
The Mission Starts Here TheIncLab engineers and delivers intelligent digital applications and platforms that revolutionize how our customers and mission-critical teams achieve success. Your Mission, Should You Choose to Accept We’re looking for a Senior DevOps Engineer who is passionate about automation, operational excellence, and building reliable, scalable infrastructure. This role blends core DevOps responsibilities with a strong emphasis on Site Reliability Engineering (SRE), helping ensure system uptime, performance, and observability across environments. The ideal candidate brings hands-on experience managing CI/CD pipelines, cloud infrastructure, and production operations, with a mindset oriented toward reducing toil and driving continuous improvement. What will you do? DevOps Engineering Build, maintain, and improve CI/CD pipelines using GitLab CI/CD or similar tools. Automate infrastructure provisioning, deployment, and maintenance using Terraform, Ansible, or related technologies. Collaborate with developers and QA to create reliable deployment paths from local dev to production. Implement infrastructure-as-code practices across environments (e.g., AWS, Kubernetes, bare-metal). Site Reliability Engineering (SRE) Design and implement monitoring, alerting, and observability systems to maintain high availability and performance. Respond to incidents, lead root cause analysis, and implement preventive measures. Establish and evolve SLOs/SLIs to ensure measurable system reliability. Participate in on-call rotation and help build automation to reduce the need for human intervention. Drive capacity planning, performance tuning, and cost optimization initiatives. System Operations Administer Linux (Ubuntu/Debian) and Windows-based infrastructure. Manage self-hosted GitLab instances and ensure secure, performant operation. Implement and enforce security best practices across infrastructure (IAM, RBAC, least privilege, etc.). 
Support both containerized and virtualized workloads across environments. Requirements Capabilities that will enable your success 5+ years in DevOps, SRE, or Infrastructure Engineering roles. Hands-on experience and proficiency with AWS services (EC2, S3, RDS, VPC, IAM, etc.) and infrastructure automation (Terraform, Ansible, or similar). Experience deploying and managing infrastructure using Terraform and/or Ansible. Solid knowledge of Linux system administration. Strong skills in Windows system administration environments. Proven experience managing and automating GitLab, including CI/CD pipelines. Proficiency in at least one programming or scripting language (Python, Bash, etc.). Experience implementing monitoring, logging, and alerting solutions (CloudWatch, Datadog, CloudTrail). Solid understanding of networking, security best practices, and high-availability system design. Familiarity with version control systems (Git) and GitLab workflows. Strong troubleshooting and incident response skills, with a focus on automation and root cause analysis. Ability to travel up to 20%. Preferred Qualifications AWS certification or equivalent practical experience. Knowledge of cloud cost optimization and efficiency practices. Experience with self-hosted GitLab instances and CI/CD pipelines. Clearance Requirements Applicants must be a U.S. Citizen and willing and eligible to obtain a U.S. Security Clearance at the Secret or Top-Secret level. Existing clearance is preferred. Benefits At TheIncLab we recognize that innovation thrives when employees are provided with ample support and resources. Our benefits packages reflect that: Hybrid and flexible work schedules Professional development programs Training and certification reimbursement Extended and floating holiday schedule Paid time off and Paid volunteer time Health and Wellness Benefits include options for Medical, Dental, and Vision insurance along with access to Wellness, Mental Health, and Employee Assistance Programs.
100% Company Paid Benefits that include STD, LTD, and Basic Life insurance. 401(k) Plan Options with employer matching Incentive bonuses for eligible clearances, performance, and employee referrals. A company culture that values your individual strengths, career goals, and contributions to the team. About TheIncLab Founded in 2015, TheIncLab (“TIL”) is the first human-centered artificial intelligence (AI+X) lab. We engineer complex, integrated solutions that combine cutting-edge AI technologies with emerging systems-of-systems to solve some of the most difficult challenges in the defense and aerospace industries. Our work spans diverse technological landscapes, from rapid ideation and prototyping to deployment. At TIL, we foster a culture of relentless optimism. No problem is too hard, no project is too big, and no challenge is too complex to tackle. This is possible due to the positive attitude of our teams. We approach every problem with a “yes” attitude and focus on results. Our motto, “demo or die,” encompasses the idea that failure is not an option. We do all of this with a work ethic rooted in kindness and professionalism. The positive attitude of our teams is only possible due to the support TIL provides to each individual. At TIL, we believe that every challenge is an opportunity for growth and innovation. Our teams are encouraged to think outside the box and come up with creative solutions to complex problems. We understand that the path to success is not always straightforward, but we are committed to persevering and finding a way forward. Our culture of relentless optimism is not just about having a positive attitude; it is about taking action and making things happen. We believe in the power of collaboration and teamwork, and we know that by working together, we can achieve great things. Our teams are made up of individuals who are passionate about their work and dedicated to making a difference. 
Learn more about TheIncLab and our job opportunities at www.theinclab.com. Salary range guidance provided is not a guarantee of compensation. Offers of employment may be at a salary range that is outside of this range and will be based on qualifications, experience, and possible contractual requirements. This is a direct hire position, and we do not accept resumes from third-party recruiters or agencies.

Posted 30+ days ago

Northstrat logo
Northstrat – Fort Belvoir, VA
Northstrat is seeking a highly motivated Senior DevOps Engineer. The ideal candidate will have strong working knowledge of Linux systems administration and a background in Big Data solutions, configuration management, automation, scripting, PostgreSQL database administration, and AWS. The DevOps Engineer will be responsible for implementing infrastructure, automating deployment processes, and ensuring the reliability and scalability of our services. If you have a passion for DevOps and are interested in working with a dynamic and innovative team, we encourage you to apply for this exciting opportunity. Essential Job Responsibilities: Support development and deployment of infrastructure in AWS. Automate deployment processes and ensure reliability and scalability of services. Manage and maintain cloud infrastructure on AWS. Collaborate with development teams to integrate their applications into the infrastructure. Monitor and troubleshoot production systems and resolve issues as necessary. Continuously improve processes and tools to ensure high availability and performance. Stay current with new technologies and industry trends, continuously exploring new ways to improve our infrastructure. Other duties as assigned. Requirements: A TS/SCI U.S. Government security clearance is required; U.S. citizenship is required. 9+ years of experience in DevOps engineering and a Bachelor's degree in a related field; or 7 years of relevant experience with a Master's degree in a related field; or a High School Diploma (or equivalent) and 13 years of relevant experience. 
Strong knowledge of Linux, including system administration and troubleshooting. Proficiency in configuration management tools such as Ansible or Puppet. Knowledge of AWS services (EC2, S3, Lambda) and their application to deployment and management of infrastructure. Strong working knowledge of PostgreSQL databases, including administration and troubleshooting. Experience with application and OS deployment, scaling, and management. Ability to develop in multiple programming languages such as Bash, Python, or Go. Familiarity with Git and other development tools such as deployment pipelines. Excellent problem-solving skills and the ability to identify and troubleshoot complex issues. Excellent oral and written communication skills. Understanding of Agile software development methodologies and use of standard software development tool suites. Must be able to work on the customer site in Ft. Belvoir, VA, 5 days/week. Preferred Requirements: Experience with big data technologies such as Hadoop, Spark, MongoDB, ElasticSearch, Hive, Drill, Impala, Trino, or Presto. Experience with containers and Kubernetes is a plus. Benefits: Work/Life Balance: Northstrat values true work-life balance. We offer power-of-choice benefits designed to best meet the needs of you and your lifestyle. Our benefits programs are designed to support and encourage wellness, healthy living, retirement investment, and lifetime learning. Pay Range: There are a host of factors that can influence final salary, including, but not limited to, geographic location, Federal Government contract labor categories and contract wage rates, relevant prior work experience, specific skills and competencies, education, and certifications. We also offer competitive compensation, benefits, and professional development opportunities. Please refer to our Benefits section for additional details. Flex Time: Northstrat does not mandate specific working hours. 
Although project requirements may dictate schedules, a Northstrat employee is only required to work an average of 8 hours per weekday over the course of a month. For example: John worked 12 hours on June 1st to meet a project deadline. On June 15th, John worked only 4 hours because he left early for a long weekend. John’s IBA was not debited for time off because flex time allowed him to carry over those 4 hours from June 1st. Individual Benefit Account (IBA): To attract and retain the highest quality staff, Northstrat provides a unique and versatile benefits package, the Individual Benefit Account (IBA), which places the power of choice in the hands of our greatest asset – the employee. The purpose of the IBA is to provide attractive benefits to all full-time employees of Northstrat on a flexible basis that enables each covered employee to select a package that best suits his or her needs. Whether those needs are paid time off, medical expenses, prescription drug expenses, cash disbursement, or a combination of any of these, the IBA provides the flexibility to help you meet your specific goals. IBA benefits accrue each month in an amount equivalent to 50% of the employee’s monthly compensation rate; the effective dollar amount of this accrual is in addition to the employee’s salary. Profit Sharing Plan (PSP): The PSP is a qualified retirement plan that Northstrat funds quarterly on the employee’s behalf through the IBA in an amount equivalent to 25% (up to the IRS contribution limit) of the employee’s compensation. That is, of the 50% accrual in the IBA, half of the amount accrued is applied to the PSP. Stock Options: Because Northstrat is an employee-owned company, all new employees are offered stock options. Employees have the opportunity to receive additional stock options based on accomplishment of individual performance goals. 
Stock owners elect the Board of Directors and are directly impacted by the success of the company. Lifelong Learning: Our culture promotes and nurtures a growth environment. We hire and scale rapidly to meet the needs of our partner customers. Through periodic company-sponsored training events and the ability to use IBA funds for reimbursement of work-related education expenses, you will have the opportunity to continually grow your skills and abilities. Join Our Talented Team: We hire the BEST employees and value each one. Since 2021, The Washington Post has recognized Northstrat among its "Top Workplaces". We think that your friends and family will like it here too, so we offer employee referral incentives. Northstrat is an Equal Opportunity Employer: We are committed to fostering an inclusive, diverse workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, disability, veteran status, or other legally protected status.

Posted 30+ days ago

T logo
Two95 International Inc. – Buffalo, NY
Job Title: Senior DevOps (Azure) Engineer. Location: Buffalo, NY (Hybrid). Type: 1+ year contract. Rate: $Open/Market. Requirements: A minimum of ten years of experience working in technology infrastructure and engineering. A minimum of five years leading efforts in DevOps, with expert skills in automated code deployments. Solid knowledge of domain and industry tools (CNCF, DevOps, CI/CD, secrets management, and container registries). Solid experience deploying highly available applications in on-premises environments and Azure clouds. Must have demonstrated skills and experience writing Ansible playbooks for automation and building pipelines. Experience with infrastructure-as-code tools (e.g., Terraform, Ansible). Experience working with build, test, and deployment tools. Experience working with source control management systems. Experience working with containers and container orchestration.

Posted 30+ days ago

Truveta logo
Truveta – Seattle, WA
DevOps Engineer. Truveta is the world’s first health-provider-led data platform, with a vision of Saving Lives with Data. Our mission is to enable researchers to find cures faster, empower every clinician to be an expert, and help families make the most informed decisions about their care. Achieving Truveta’s ambitious vision requires an incredible team of talented and inspired people with a special combination of health, software, and big data experience who share our company values. This position is based out of our headquarters in the Greater Seattle area. #LI-onsite Who We Need: Truveta is rapidly building a talented and diverse team to tackle complex health and technical challenges. Beyond core capabilities, we are seeking problem solvers, passionate and collaborative teammates, and those willing to roll up their sleeves while making a difference. If you are interested in the opportunity to pursue purposeful work, join a mission-driven team, and build a rewarding career while having fun, Truveta may be the perfect fit for you. This Opportunity: Patients, doctors, and medical researchers deserve to benefit from the large-scale technological innovations and digital intelligence that have driven progress in office productivity, entertainment, and social networking. We are looking for software engineers excited by the opportunity to improve health care in far-reaching ways. In this role, you’ll be responsible for designing, implementing, and maintaining Truveta’s DevOps infrastructure. You’ll ensure that our CI/CD pipelines, cloud environments, and automation systems are secure, scalable, and reliable, enabling our teams to deliver high-quality software efficiently. You will play a key role in our DevOps platform, from Azure DevOps to GitHub Enterprise, integrating modern DevSecOps practices and enabling AI-assisted development tools. This is a hands-on role with significant strategic impact across engineering, security, and compliance functions. 
Responsibilities: Platform Administration: Manage and maintain Truveta’s Azure DevOps and GitHub Enterprise environments, including repositories, pipelines, and access controls. Ensure the reliability, scalability, and security of CI/CD infrastructure. Migration & Integration: Lead the migration effort from one DevOps platform to another, collaborating with engineering and security teams. Develop migration strategies, validation procedures, and rollback plans to ensure smooth transitions with minimal disruption. Security, Compliance & Governance: Implement and enforce role-based access controls, compliance standards, and DevSecOps best practices. Integrate and manage code scanning and security tools (e.g., Snyk, Wiz, JFrog, SonarQube, GitHub Advanced Security). Partner with compliance teams to maintain audit-ready DevOps processes. Continuous Integration / Continuous Deployment: Design, optimize, and automate multi-language CI/CD pipelines with robust build, test, and release workflows. Improve developer velocity and deployment reliability through scalable automation and tooling enhancements. AI-Assisted Coding Enablement: Support adoption and governance of AI-driven development tools (e.g., GitHub Copilot, GitHub Coding Agents, Claude Sonnet 4.5). Manage configuration, licensing, and compliance for AI assistants within GitHub Enterprise. Partner with engineering leadership to integrate AI-assisted development practices effectively and safely. Collaboration & Mentorship: Collaborate cross-functionally with software engineering, security, and compliance teams to align DevOps capabilities with company goals. Mentor engineers and champion DevOps best practices across the organization. Key Qualifications: Bachelor’s degree in Computer Science, Engineering, or equivalent experience. 5+ years of hands-on experience in DevOps, platform engineering, or infrastructure automation. Proven expertise with Azure DevOps and GitHub Enterprise Cloud administration. 
Strong experience designing and managing CI/CD pipelines and artifact repositories. Experience with security and compliance automation tools (Snyk, Wiz, JFrog, SonarQube, etc.). Proficiency with Infrastructure-as-Code tools (Terraform, ARM templates, or GitHub Actions workflows). Deep understanding of DevSecOps principles and compliance frameworks. Hands-on experience with AI-assisted coding platforms (e.g., GitHub Copilot, Copilot for Business). Strong scripting skills in PowerShell, Bash, or Python. Excellent problem-solving, communication, and collaboration skills. Demonstrated success in agile environments, including code reviews and pair programming. A strong GitHub or project portfolio showcasing technical depth and automation expertise. Why Truveta? Be a part of building something special. Now is the perfect time to join Truveta. We have strong, established leadership with decades of success. We are well-funded. We are building a culture that prioritizes people and their passions across personal, professional, and everything in between. Join us as we build an amazing company together. We Offer: Interesting and meaningful work for every career stage. Great benefits package. Comprehensive benefits with strong medical, dental, and vision insurance plans. 401(k) plan. Professional development and training opportunities for continuous learning. Work/life autonomy via flexible work hours and flexible paid time off. Generous parental leave. Regular team activities (virtual and in-person). The base pay for this position is $125,000 to $140,000. The pay range reflects the minimum and maximum target. Pay is based on several factors, including location, and may vary depending on job-related knowledge, skills, and experience. Certain roles are eligible for additional compensation such as incentive pay and stock options. If you are based in California, we encourage you to read this important information for California residents linked here. 
Truveta is committed to creating a diverse, inclusive, and empowering workplace. We believe that having employees, interns, and contractors with diverse backgrounds enables Truveta to better meet our mission and serve patients and health communities around the world. We recognize that opportunities in technology have historically excluded, and continue to disproportionately exclude, Black and Indigenous people, people of color, people from working class backgrounds, people with disabilities, and LGBTQIA+ people. We strongly encourage individuals with these identities to apply even if you don’t meet all of the requirements. Please note that all applicants must be authorized to work in the United States for any employer as we are unable to sponsor work visas or permits (e.g. F-1 OPT, H1-B) at this time. We appreciate your interest in the position and encourage you to explore future opportunities with us.

Posted today

Topaz Labs logo
Topaz Labs – Dallas, TX
We're Topaz Labs, an AI tech company that builds one-click image and video quality software with deep learning. Over 1M photographers and designers trust us with their work, including teams at Google, Nvidia, NASA, and more. We've processed over 1 billion images, achieved 1000% revenue growth in the last 4 years, and we're just getting started. About us: Rocketship growth and opportunity for impact ($3M → $48M revenue in six years). Over 1 million customers (including companies like Apple, NASA, Netflix) have used us for over 1 billion photos. Our tech has been covered by Fast Company, The Verge, Engadget, Mashable, BBC, and more. We're a world-class team that executes quickly, obsesses about the customer experience, promotes from within, and we're profitable with infinite runway. About the role: As a DevOps Engineer on our team, you would be a key part of development on all Topaz software, including our core products (Photo AI, Video AI, and Gigapixel), ML model training infrastructure and distribution, and our website and other internal tooling. You will be working alongside our current DevOps engineer and helping expand our pipeline to deliver new products and ML models as we grow in 2024. Another chief goal is to increase automated testing and stability of our applications. You will also work closely with our product engineering team to sustain and improve biweekly update deliveries, as well as internal development processes. You will be joining an awesome team that sets a high standard for craftsmanship and feature delivery. About you: At least 2 years of professional working experience in a related field. Deep familiarity with C++ build tools (3+ years). Hands-on experience with AWS, Azure, or similar cloud platforms. Experience building and deploying CI/CD pipelines. Experience implementing test automation. Knowledge of networking infrastructure, including CDN caching. Preferred: Experience building releases for Windows, macOS, or iOS. 
Experience in Python, Go, or JavaScript. Qt development/build experience. Experience with Conan. Experience with test automation tools such as Eggplant. Our compensation packages will correspond to experience, but also performance during the interview process. They include base salary, equity, and profit sharing. Do you meet most but not 100% of the above? We’d still like to hear from you; we are passionate about developing a diverse team and culture, so please apply if you’re interested! This is a unique role for someone interested in making a deep impact at a high-growth tech software company. We offer a strong base salary, plus significant ownership that scales with the company's growth. We also offer 100% covered medical/dental/vision for employees, 15 days annual PTO, 5 personal days, plus holidays, and 401(k) matching. This is a full-time onsite role in Dallas, TX, and we will ask you to relocate if you're not in the area.

Posted 30+ days ago

E logo
E-Space – Saratoga, CA

$100,000 - $170,000 / year

Ready to make connectivity from space universally accessible, secure, and actionable? Then you’ve come to the right place! At E-Space, we’re focused on bridging Earth and space with the world’s most sustainable low Earth orbit (LEO) satellite network. We’re a team of bold thinkers, ambitious leaders and dynamic doers—and we’re disrupting NewSpace by fundamentally changing the design of legacy LEO space systems to deliver entirely new satellite capabilities at a fraction of the cost. We’re intentional, we’re unapologetically curious and we’re 100% committed—to saving space, to protecting our planet and to turning connectivity into actionable intelligence. What you will be doing: Design, deploy, and maintain highly-scalable, highly-available software systems in AWS Architect and manage containerized applications on Amazon EKS with focus on reliability and performance Build and maintain Infrastructure as Code using Terraform for AWS cloud resources Develop and optimize CI/CD pipelines for automated testing, deployment, and rollback capabilities Implement comprehensive monitoring, alerting, and observability solutions using CloudWatch, Prometheus, and Grafana Ensure system reliability through SLI/SLO definition, error budgets, and incident response procedures Collaborate directly with engineering teams to optimize application deployment and operations Manage deployments and scaling strategies to support mission-critical operations Automate and enforce cloud security, governance, and compliance controls Participate in on-call rotation and lead incident response for production level systems What you bring to this role: 5+ years of experience in SRE, DevOps, or Platform Engineering roles Proven experience designing and operating mission-critical, highly-available systems within AWS Advanced proficiency in Infrastructure as Code using Terraform (OpenTofu) Deep experience with Kubernetes, EKS, Helm, and container orchestration Strong CI/CD pipeline development and 
management experience (Bitbucket preferred) Proficiency in Python and Bash scripting for automation Experience with monitoring and observability tools (Prometheus, Grafana, ELK Stack) Knowledge of capacity planning and performance optimization Experience with database operations and scaling (RDS, Aurora, or similar) Extra bonus points for the following: AWS Solutions Architect Professional, Certified Kubernetes Administrator (CKA), or equivalent expertise Experience with incident management and post-mortem processes Experience with GitOps workflows and tools (ArgoCD, Flux) Knowledge of service mesh technologies (Istio, Linkerd) Experience with chaos engineering and disaster recovery planning Experience with Zero Trust Networking (ZTNA) or VPN solutions Background in aerospace, defense, or other mission-critical industries Strong intellectual curiosity and commitment to continuous learning Exceptional attention to detail and an ownership mentality The estimated range is meant to reflect an anticipated salary range for the position in question, which is based on market data and other factors, all of which are subject to change. Individual pay is based on location, skills and expertise, depth of relevant experience, and other relevant factors. For questions about this, please speak to the recruiter if you decide to apply for the role and are selected for an interview. This is a full-time, exempt position, based out of our Saratoga office. The target base pay for this position is $100,000 - $170,000 annually. The total compensation package will be determined by various factors such as your relevant job-related knowledge, skills, and experience. We are redefining how satellites are designed, manufactured and used—so we’re looking for candidates with passion, deep knowledge and direct experience on LEO satellite component development, design and in-orbit activities. If that’s your experience – then we’ll be immediately wow-ed. 
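The SLI/SLO and error-budget responsibilities this listing describes come down, at bottom, to simple arithmetic over an availability target. A minimal sketch in Python (the 99.9% target and 30-day window are example values chosen for illustration, not figures from the posting):

```python
def allowed_downtime_minutes(slo, window_days=30):
    """Minutes of downtime an error budget permits over the window.

    Example values only: a 99.9% SLO over a 30-day window leaves
    43,200 minutes * 0.001 = 43.2 minutes of budget.
    """
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

print(round(allowed_downtime_minutes(0.999), 1))      # prints: 43.2
print(round(allowed_downtime_minutes(0.9999, 7), 2))  # prints: 1.01
```

In practice the budget is tracked against measured SLIs, and burning it too fast triggers the incident-response and post-mortem processes the listing mentions.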
E-Space is not currently able to provide employment sponsorship for candidates who do not hold work authorization for the location of this role.

Posted 30+ days ago

D logo

Senior DevOps Engineer

Domyn – New York, NY


Job Description

We are looking for a talented DevOps Engineer to join our US team and help power the next generation of enterprise AI. 

As our company delivers full-stack AI applications to some of the world’s leading financial institutions, your work will be critical in ensuring these solutions run flawlessly in the most demanding environments.

In this role, you’ll provide technical leadership and hands-on expertise to keep our infrastructure secure, reliable, and high-performing. You’ll be responsible for optimizing deployments, identifying improvements, and creating and executing upgrade strategies. A proven ability to deliver solutions that meet enterprise-grade requirements will be essential.

You’ll collaborate closely with our engineering, product, and customer teams to streamline continuous delivery of software and AI applications. As we scale deployments across Google Cloud Platform, Microsoft Azure, Amazon Web Services, and on-premises environments, you’ll play a central role in maintaining, optimizing, and supporting the systems that make it all possible.

This is an opportunity to work at the forefront of AI adoption in financial services, solving complex challenges and shaping the infrastructure behind innovative enterprise solutions.

Responsibilities:

  • Implement and manage DevOps tools, processes, and infrastructure.
  • Collaborate with development and operations teams to design and implement continuous integration and deployment pipelines.
  • Interact and collaborate with clients to provide technical support and guidance, ensuring their satisfaction.
  • Automate infrastructure provisioning, configuration, and monitoring.
  • Ensure high availability and performance of our software systems.
  • Identify bottlenecks and implement solutions to improve system performance.
  • Monitor and troubleshoot production issues and provide timely resolution.
  • Maintain and enhance system security and data protection measures.
  • Stay up to date with industry best practices and emerging technologies for DevOps.
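Several of the responsibilities above (automating monitoring, resolving production issues promptly) typically reduce to small scripts in practice. As a hedged illustration only — the probe, retry count, and delay below are invented for the sketch, not taken from the posting — a minimal health check with retries might look like:

```python
import time

def check_service(probe, retries=3, delay=0.5):
    """Return True if `probe` reports healthy within `retries` attempts.

    `probe` is any zero-argument callable returning a bool. In a real
    pipeline it would wrap an HTTP request or TCP connect; it is left
    abstract here so the sketch stays self-contained.
    """
    for _ in range(retries):
        if probe():
            return True
        time.sleep(delay)  # fixed backoff between attempts
    return False

# Simulated probe that becomes healthy on its third call.
state = {"calls": 0}
def flaky_probe():
    state["calls"] += 1
    return state["calls"] >= 3

print(check_service(flaky_probe, delay=0.01))  # prints: True
```

A production version would add exponential backoff, structured logging, and alerting, but the control flow is the same.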

Requirements

What You Have

  • At least 6 years of experience as a Senior DevOps Engineer or in a similar role
  • Excellent knowledge of Cloud Services (GCP/Azure/AWS)
  • Experience deploying to on-prem hardware. 
  • Experience with containerization technologies such as Docker and Kubernetes.
  • Experience with version control systems such as Git.
  • Strong skills in defining and implementing network architecture, IAM policies, HA setups and backups.
  • Expertise with Kubernetes and with at least one continuous delivery platform
  • Proficiency in Linux and Bash
  • Strong experience setting up and managing cloud architectures through Infrastructure as Code (Terraform)
  • Experience with Postgres DB and monitoring systems (Datadog, Elastic Cloud, Dynatrace)
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration abilities.
  • Ability to multitask and prioritize tasks in a fast-paced environment.
  • Fluency in English 

What Would Be Nice To Have

  • Good knowledge of programming languages such as Java, Python, and JavaScript
  • GCP/Azure/AWS Certifications
  • Previous experience in financial services sector 

Who You Are

  • Passionate about the digital world
  • Problem solver
  • Collaborative mindset
  • Versatile with new domains and the latest technologies

Benefits

Perks

Domyn offers a competitive compensation structure, including salary, performance-based bonuses, and additional components based on experience. All roles include comprehensive benefits as part of the total compensation package.

About Domyn

Domyn is a company specializing in the research and development of Responsible AI for regulated industries, including financial services, government, and heavy industry. It supports enterprises with proprietary, fully governable solutions based on a composable AI architecture — including LLMs, AI agents, and one of the world’s largest supercomputers.

At the core of Domyn’s product offering is a chip-to-frontend architecture that allows organizations to control the entire AI stack — from hardware to application — ensuring isolation, security, and governance throughout the AI lifecycle.

Its foundational LLMs, Domyn Large and Domyn Small, are designed for advanced reasoning and optimized to understand each business’s specific language, logic, and context. Provided under an open-enterprise license, these models can be fully transferred and owned by clients.

Once deployed, they enable customizable agents that operate on proprietary data to solve complex, domain-specific problems. All solutions are managed via a unified platform with native tools for access management, traceability, and security.

Powering it all, Colosseum — a supercomputer in development using NVIDIA Grace Blackwell Superchips — will train next-gen models exceeding 1T parameters. 

Domyn partners with Microsoft, NVIDIA, and G42. Clients include Allianz, Intesa Sanpaolo, and Fincantieri.

Please review our Privacy Policy here https://bit.ly/2XAy1gj
