
Auto-apply to these DevOps jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

The Swift Group (Augusta, Georgia)
Locations: Hanover, MD; Columbia, MD; Augusta, GA; San Antonio, TX

The Swift Group is a privately held, mission-driven and employee-focused services and solutions company headquartered in Reston, VA. Our capabilities include Software Development, Engineering & IT, Data Science, Cyber Enablement, Logistics, and Training. Founded in 2019, Swift supports Civilian, Defense, and Intelligence Community customers across the country and around the globe.

We are looking for a DevOps Engineer to join our high-performing team on a dynamic program. In this role, the DevOps Engineer will work hands-on with cloud infrastructure, containerized platforms, and automation frameworks to enable scalable, secure, and efficient delivery of mission-critical systems. This position requires expertise in Kubernetes, AWS, automation tools, and scripting, with the opportunity to support cutting-edge Big Data platforms. You will help architect, deploy, and maintain infrastructure for both cloud and on-premise systems, while championing reliability, repeatability, and performance.

Responsibilities:
• Design, implement, and maintain cloud-native and hybrid infrastructure solutions in AWS, with a focus on Elastic Kubernetes Service (EKS)
• Deploy and manage containerized applications using Kubernetes across development and production environments
• Automate infrastructure provisioning and application deployment using Infrastructure as Code (IaC) tools such as Terraform and Ansible
• Monitor and support Linux-based systems for availability and performance
• Collaborate with development teams to integrate CI/CD pipelines and improve release cycles using tools like Git, Flux, and Jenkins
• Continuously improve processes and tools to ensure high availability and performance
• Support deployment of portable edge-computing infrastructure to non-cloud (on-premise) environments
• Diagnose and resolve system issues across the stack, including networking, containers, and cloud services
• Participate in Agile development activities and contribute to continuous process improvement initiatives
• Stay current with emerging DevOps technologies, container orchestration strategies, and cloud capabilities

Requirements:
• Bachelor’s degree in Computer Science or a related field
• Mid-level candidates: 3-8 years of relevant experience; senior-level candidates: 9-13 years; SME-level candidates: 14+ years
• Hands-on experience with AWS services such as EKS, EC2, EBS, S3, and Lambda
• Proficient in Linux systems administration and troubleshooting
• Proficient in scripting/programming in Python, Bash, or Go
• Knowledge of containerization technologies (Docker) and orchestration platforms (Kubernetes)
• Experience with Terraform, Ansible, and other IaC or configuration management tools
• Familiarity with Git, Flux, and other development and deployment tools
• Demonstrated ability to work in Agile environments and use tools such as JIRA and Confluence
• Must be able to obtain Security+ certification within 60 days of hire
• Must be available to work on-site 4-5 days per week; flexibility is required to align with customer needs
• US citizenship and an active Secret clearance; will also accept TS/SCI or TS/SCI with CI Polygraph

Desired Experience:
• Experience with big data technologies such as Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, and Presto
• Work may require some on-call support

#LI-DI1 #Onsite

The Swift Group and Subsidiaries are an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other protected class.

Pay Range: $49,996.80 - $290,004.00. Pay ranges are a general guideline and not intended as a guaranteed and/or implied final compensation or salary for this job opening. Determination of official compensation or salary relies on several factors including, but not limited to: level of position, complexity of job responsibilities, geographic location, work experience, education, certifications, Federal Government contract labor categories, and contract wage rates. At The Swift Group and Subsidiaries, you will receive comprehensive benefits including but not limited to: healthcare, wellness, financial, retirement, education, and time off benefits.
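The IaC tools this posting names (Terraform, Ansible) all revolve around one idea: declare the desired state, diff it against the current state, and apply only the difference. A minimal sketch of that "plan" step, with hypothetical resource names, not any real Terraform internals:

```python
# Sketch of an IaC "plan": diff desired state against current state and
# emit the actions needed to converge. Resource names are illustrative.

def plan(current: dict, desired: dict) -> dict:
    """Return create/update/delete actions keyed by resource name."""
    actions = {"create": [], "update": [], "delete": []}
    for name, spec in desired.items():
        if name not in current:
            actions["create"].append(name)
        elif current[name] != spec:
            actions["update"].append(name)
    for name in current:
        if name not in desired:
            actions["delete"].append(name)
    return actions

current = {"eks_cluster": {"version": "1.28"}, "old_bucket": {"acl": "private"}}
desired = {"eks_cluster": {"version": "1.29"}, "new_vpc": {"cidr": "10.0.0.0/16"}}
print(plan(current, desired))
```

Because the plan is computed from state rather than from a script of steps, re-running it against an already-converged environment yields no actions, which is the idempotency property IaC tools advertise.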

Posted 30+ days ago

ClinDCast (Irving, Alabama)
Job Title: Azure DevOps Lead Technical Consultant
Location: Frisco, TX (local candidates preferred)
Type: Contract

Job Description: HCL is seeking an Azure DevOps Lead Technical Consultant with strong hands-on experience in Azure DevOps, CI/CD, and AEM Cloud. The role involves managing and deploying AEM applications, Kubernetes deployments (via GitLab CI/CD and Helm), API gateway/security, release management, troubleshooting, and Docker containerization. Experience with Splunk (dashboards, queries, alerts) and OpenTelemetry is required.

Requirements:
• Proficiency in Azure DevOps and CI/CD
• Experience with AEM Cloud, GitLab CI/CD, Kubernetes, Helm, and Docker
• API gateway/security, configuration and secrets management
• Release management and troubleshooting skills
• Splunk dashboards/queries/alerts, OpenTelemetry
• Certification: Azure DevOps Engineer Expert (preferred)
• Highly preferred: ex-T-Mobile experience

If interested, please share your updated resume.

Compensation: $60.00 - $65.00 per hour

Empowering the Future of Healthcare: The healthcare industry is on the brink of a paradigm shift in which patients are increasingly viewed as empowered consumers who use digital technologies to better understand and manage their own health. As a result, there is growing demand for a range of patient-centric services, including personalized care tailored to each individual's unique needs, health equity that ensures access to care for all, price transparency to make healthcare more affordable, streamlined prior authorizations for medications, the availability of therapeutic alternatives, health literacy to promote informed decision-making, reduced costs, and many other initiatives designed to improve the patient experience. ClinDCast is at the forefront of shaping the future of healthcare by partnering with globally recognized healthcare organizations and offering them innovative solutions and expert guidance. Our suite of services is designed to cater to a broad range of needs of healthcare organizations, including healthcare IT innovation, electronic health record (EHR) implementation and optimization, data conversion, regulatory and quality reporting, enterprise data analytics, FHIR interoperability strategy, payer-to-payer data exchange, and application programming interface (API) strategy.
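The Splunk alerting the role requires boils down to evaluating a condition over matched events and firing when a threshold is crossed. A toy sketch of that evaluation, with illustrative field names and thresholds (not Splunk's actual query language or API):

```python
# Toy model of a count-based alert rule like a Splunk scheduled alert:
# fire when matching events in the search window exceed a threshold.
# Event shape and the threshold of 5 are assumptions for illustration.

def should_alert(events, level="ERROR", threshold=5):
    """True when the count of events at `level` exceeds `threshold`."""
    return sum(1 for e in events if e.get("level") == level) > threshold

window = [{"level": "ERROR", "svc": "aem"}] * 6 + [{"level": "INFO", "svc": "aem"}]
print(should_alert(window))   # six errors > threshold of five
```

In a real deployment the window and threshold live in the alert's saved-search configuration; the logic shown here is the part that decides whether to trigger.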

Posted 30+ days ago

Unisys Corporation (Richmond, Virginia)
What success looks like in this role:
• DevSecOps Pipeline Design & Automation: Design and implement secure, automated CI/CD pipelines in AWS using tools like AWS CodePipeline, Jenkins, GitLab CI, and other DevOps tools, ensuring security is built into every phase of development, from code to production.
• Cloud Infrastructure Security: Architect, configure, and maintain secure AWS infrastructure using best practices in identity and access management (IAM), networking, encryption, and more, with a focus on risk mitigation and compliance.
• Security Integration: Integrate security tools and practices into the DevOps lifecycle, including code scanning, vulnerability assessments, compliance checks, and automated security testing.
• Security Monitoring & Incident Response: Continuously monitor AWS environments for security vulnerabilities and performance issues. Implement proactive monitoring tools (e.g., AWS CloudTrail, GuardDuty, AWS Security Hub) and lead incident response efforts to mitigate threats.
• Automation & Infrastructure as Code (IaC): Leverage tools like Terraform, AWS CloudFormation, and the AWS CLI to automate the deployment and management of secure infrastructure.
• Risk Management & Compliance: Ensure that AWS-based applications and systems adhere to industry standards and compliance frameworks (e.g., SOC 2, GDPR, PCI-DSS) by implementing and maintaining security controls and audits.
• Collaboration & Mentoring: Work closely with development, security, and operations teams to ensure seamless integration of security into the DevOps pipeline. Mentor and guide junior engineers on best practices for security in DevOps environments.
• Continuous Improvement: Stay up to date with the latest trends, tools, and best practices in DevSecOps, AWS, and cloud security. Proactively recommend improvements to systems and processes for better security posture, performance, and cost-efficiency.
• Documentation & Reporting: Maintain detailed documentation for DevSecOps processes, including security configurations, vulnerability reports, and incident responses.

You will be successful in this role if you have:
• Experience: 2-3 years of hands-on experience in DevOps, with at least 1 year focusing on AWS cloud environments and security integration in the DevOps lifecycle.
• AWS Services Expertise: Strong knowledge of core AWS services, including EC2, VPC, IAM, Lambda, S3, RDS, and CloudWatch. Experience with security-focused AWS services such as AWS Security Hub, GuardDuty, and KMS is required.
• DevOps Tools: Proficiency in DevOps tools for CI/CD pipelines, including AWS CodePipeline, Jenkins, GitLab CI, or similar. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes, Amazon EKS).
• Security Tools & Practices: Experience with automated security tools such as Snyk, Checkmarx, SonarQube, or others for static and dynamic code analysis, as well as infrastructure scanning tools like AWS Config and Prisma Cloud.
• Infrastructure as Code (IaC): Hands-on experience using Terraform, AWS CloudFormation, or similar IaC tools to automate secure cloud infrastructure.
• Compliance & Risk Management: Expertise in implementing security controls, vulnerability management, and compliance frameworks (SOC 2, ISO 27001, GDPR, PCI-DSS, etc.) in cloud environments.
• Security Architecture & Practices: Strong understanding of security best practices, including encryption (at rest and in transit), identity and access management (IAM), network segmentation, and secure coding practices.
• Scripting & Automation: Proficiency in scripting languages (Python, Bash, or PowerShell) for automation of security and operational tasks.
• Monitoring & Logging: Experience with AWS security and monitoring tools like CloudWatch, CloudTrail, GuardDuty, and AWS Config, as well as third-party monitoring solutions.
• Incident Response & Forensics: Ability to respond to and investigate security incidents, conduct root cause analysis, and implement preventive measures.
• Certifications: AWS Certified DevOps Engineer – Professional, AWS Certified Security Specialty, or similar certifications are highly preferred.
• Communication Skills: Strong written and verbal communication skills, with the ability to explain security concepts to non-technical stakeholders.

Preferred Qualifications:
• Familiarity with container security and microservices architectures.
• Knowledge of serverless security practices (AWS Lambda, API Gateway, etc.).
• Experience with multi-cloud or hybrid cloud environments.
• Familiarity with compliance auditing tools like AWS Audit Manager.
• Exposure to security testing frameworks such as OWASP, SANS, or NIST.

Benefit Highlights: Unisys offers an outstanding benefits package, featuring unlimited paid time off, a 401(k) match, comprehensive healthcare, HSA matching, ongoing learning opportunities, and more! We’re committed to supporting work-life balance and investing in your future success.

Video Interview Notice: At Unisys, we incorporate video interviews as a key part of our hiring process. This allows us to get to know you better and provide a more engaging and convenient interview experience. We appreciate your understanding and look forward to connecting with you virtually!

#LI-JV1

This role may require access to export-controlled commodities and technology. Therefore, to conform to U.S. export control regulations, applicants should be eligible for any required authorizations from the U.S. Government.
Unisys is proud to be an equal opportunity employer that considers all qualified applicants without regard to age, caste, citizenship, color, disability, family medical history, family status, ethnicity, gender, gender expression, gender identity, genetic information, marital status, national origin, parental status, pregnancy, race, religion, sex, sexual orientation, transgender status, veteran status, or any other category protected by law. This commitment includes our efforts to provide for all those who seek to express interest in employment the opportunity to participate without barriers. If you are a US job seeker unable to review the job opportunities herein, or cannot otherwise complete your expression of interest without additional assistance, and would like to discuss a request for reasonable accommodation, please contact our Global Recruiting organization at GlobalRecruiting@unisys.com or alternatively Toll Free: 888-560-1782 (Prompt 4). US job seekers can find more information about Unisys’ EEO commitment here.
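The IAM expertise this posting asks for rests on one evaluation rule worth internalizing: an explicit Deny always wins over an Allow, and anything not explicitly allowed is implicitly denied. A simplified model of that logic, with a statement shape that only loosely mirrors real AWS policy documents:

```python
# Simplified model of IAM-style policy evaluation: explicit Deny beats
# Allow, and the default outcome is (implicit) deny. Statement shape
# and the action/resource names below are illustrative, not AWS's engine.

def is_allowed(statements, action, resource):
    allowed = False
    for s in statements:
        if action in s["Action"] and resource in s["Resource"]:
            if s["Effect"] == "Deny":
                return False          # explicit deny always wins
            if s["Effect"] == "Allow":
                allowed = True
    return allowed                    # implicit deny unless allowed

policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": ["reports"]},
    {"Effect": "Deny",  "Action": ["s3:GetObject"], "Resource": ["secrets"]},
]
print(is_allowed(policy, "s3:GetObject", "reports"))
```

Real IAM evaluation also layers in service control policies, permission boundaries, and wildcard matching; this sketch captures only the deny-overrides core.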

Posted 1 week ago

Mythic (Austin, Texas)
We’re hiring an experienced System Administrator to play a key role in developing and maintaining the IT infrastructure used across the whole company.

About Us: Mythic is building the future of AI computing with breakthrough analog technology that delivers 100× the performance of traditional digital systems at the same power and cost. This unlocks bigger, more capable models and faster, more responsive applications, whether in edge devices like drones, robotics, and sensors, or in cloud and data center environments. Our technology powers everything from large language models and CNNs to advanced signal processing, and is engineered to operate from –40 °C to +125 °C, making it ideal for industrial, automotive, aerospace, and defense. We’ve raised over $100M from world-class investors including Softbank, Threshold Ventures, Lux Capital, and DCVC, and secured multi-million-dollar customer contracts across multiple markets.

The salary range for this position is $120,000–$225,000+ annually. Actual compensation depends on experience, skills, qualifications, and location.

System Administration at Mythic: At Mythic, our IT and System Administrator team is at the heart of keeping our engineering and business operations running smoothly. We manage everything from Linux servers and high-performance compute clusters to SaaS applications and employee workstations. Our team supports cutting-edge development by maintaining critical infrastructure like Kubernetes, Nomad, and TeamCity, while also ensuring that day-to-day tools like Google Workspace, Zoom, and Slack work seamlessly for every employee. We’re problem-solvers, builders, and enablers who provide the reliable backbone that allows Mythic’s teams to focus on innovation.

Responsibilities:
• Administer and maintain Linux servers (Ubuntu, RHEL, Rocky Linux) and package management (apt, yum).
• Manage network infrastructure (firewalls, switches, VLANs) and VPN solutions (OpenVPN).
• Support video conferencing (Zoom, Google Meet).
• Administer SaaS applications (Google Workspace, Slack, Atlassian, Okta).
• Provision and configure new Linux, Windows, and macOS laptops and workstations.
• Provide IT support for employees across hardware, software, and connectivity issues.
• Maintain and support job distribution and orchestration platforms such as TeamCity, Nomad, and Kubernetes.

Requirements:
• 3+ years of hands-on experience with Linux system administration.
• Knowledge of NFS, SMB, or other network filesystems.
• Experience with job distribution systems (e.g., LSF, Slurm, Nomad).
• Familiarity with CI/CD platforms such as TeamCity or Jenkins.
• Experience with containerization and orchestration (Docker, Kubernetes).
• Familiarity with EDA tools (e.g., Cadence, Synopsys, Mentor Graphics).
• Strong scripting skills in Bash, Python, or Ruby for automation.
• Proven debugging and problem-solving ability with minimal supervision.
• Ability to rapidly learn and support new tools/software.
• Experience with version control systems and backup solutions.
• Strong attention to detail and thorough documentation practices.
• Excellent communication and collaboration skills in a fast-paced environment.

At Mythic, we foster a collaborative and respectful environment where people can do their best work. We hire smart, capable individuals, provide the tools and support they need, and trust them to deliver. Our team brings a wide range of experiences and perspectives, which we see as a strength in solving hard problems together. We value professionalism, creativity, and integrity, and strive to make Mythic a place where every employee feels they belong and can contribute meaningfully.
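The job distribution systems listed above (LSF, Slurm, Nomad) share a core loop: take the highest-priority queued job and place it on a node with free capacity. A deliberately tiny sketch of that idea, with made-up job and node names; real schedulers add fairness, preemption, resource vectors, and much more:

```python
import heapq

# Toy dispatcher modeling the core of a cluster job scheduler:
# highest-priority job first, placed on the first node with a free slot.

def dispatch(jobs, nodes):
    """jobs: [(priority, name)]; nodes: {node: free_slots}. Returns {job: node}."""
    heap = [(-p, name) for p, name in jobs]   # negate so heapq pops max priority
    heapq.heapify(heap)
    assignments = {}
    while heap:
        _, name = heapq.heappop(heap)
        node = next((n for n, free in nodes.items() if free > 0), None)
        if node is None:
            break                              # cluster full; remaining jobs stay queued
        nodes[node] -= 1
        assignments[name] = node
    return assignments

print(dispatch([(1, "lint"), (5, "eda_run")], {"node1": 1}))
```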

Posted 1 day ago

SchoolStatus (Philadelphia, Pennsylvania)
We're looking for a Staff DevOps Engineer with deep AWS expertise to lead the design, automation, and security of our cloud infrastructure. You’ll drive cross-account AWS management, Infrastructure as Code (Terraform), container orchestration, and CI/CD pipelines, while ensuring performance, scalability, and compliance across environments. This is a high-impact role where you’ll collaborate cross-functionally, mentor engineers, and shape the future of our platform.

The impact you'll have:
• Design and manage multi-account AWS infrastructure (EC2, S3, RDS, ECS, IAM, CloudFormation).
• Automate provisioning and operations using Terraform and Python.
• Lead containerized deployments with Docker, ECS, and ECR.
• Build and maintain CI/CD pipelines using AWS CodeBuild and GitHub Actions.
• Manage identity and access (IAM policies, SSO with OneLogin), certificate lifecycles, and security tools like GuardDuty and SecurityHub.
• Oversee storage and database systems (PostgreSQL, RDS, S3) with backup and recovery strategies.
• Configure DNS (Route 53), load balancers, and cloud monitoring (CloudWatch, Pingdom).
• Collaborate across teams to support development velocity while enforcing infrastructure and security standards.

Bonus experience:
• Leading or mentoring engineers and helping set infrastructure direction.
• Exposure to AI/ML infrastructure, including LLMs, RAG pipelines, or vector databases.
• Experience with multi-cloud or SaaS integrations (Azure, GCP, Jira, Slack, etc.).

What you'll bring:
• 7+ years in DevOps, SRE, or cloud engineering roles.
• Deep experience with AWS and Infrastructure as Code (Terraform).
• Practical knowledge of CI/CD, containerization, and cloud security.
• Strong Python scripting and GitHub workflow management.
• A mindset for automation, security, and scale.

U.S. Pay Range: $130,000 - $160,000 USD. Please note that the compensation information is a good-faith estimate and is provided pursuant to Equal Pay Laws. SchoolStatus intends to offer the selected candidate base pay dependent on job-related, non-discriminatory factors, such as experience. Our team will provide more information about the total compensation package for this position during the interview process.

What we do: SchoolStatus is more than just an EdTech company; we're reshaping the future of K-12 education. Our fast-growing teams are dedicated to transforming education through innovative communications, attendance management, and teacher development solutions for schools, districts, and families. We deeply value diversity and are dedicated to fostering an inclusive environment for all our employees. We believe that exceptional candidates bring unique perspectives and skills that enable us to best meet our mission of supporting student success. If you believe you have the potential and passion for a SchoolStatus role, we encourage you to apply, and join us to make a meaningful impact on the future of education!
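The "backup and recovery strategies" duty above usually involves a retention policy: keep every recent backup, plus one representative per older period. A sketch of a simple daily-plus-weekly rule in Python; the counts (7 daily, 4 weekly) are our own assumptions, not SchoolStatus policy:

```python
from datetime import date, timedelta

# Illustrative backup retention: keep the last `daily` days of backups
# plus the most recent backup in each of the last `weekly` week buckets.

def to_keep(backups, today, daily=7, weekly=4):
    """backups: iterable of date objects. Returns the set to retain."""
    recent = {b for b in backups if (today - b).days < daily}
    weeklies = {}
    for b in sorted(backups):                  # ascending, so latest wins per bucket
        age_weeks = (today - b).days // 7
        if 0 <= age_weeks < weekly:
            weeklies[age_weeks] = b
    return recent | set(weeklies.values())

today = date(2024, 1, 31)
nightly = [today - timedelta(days=d) for d in range(30)]
print(len(to_keep(nightly, today)))
```

Everything outside the returned set is eligible for deletion, which keeps storage bounded while preserving progressively coarser restore points as backups age.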

Posted 2 weeks ago

Leyden Solutions (Newington, Virginia)
Benefits:
• 401(k) and 401(k) matching
• Bonus based on performance
• Competitive salary
• Health insurance
• Paid time off
• Vision insurance

The Senior-Level DevOps Engineer is a highly skilled professional responsible for designing, implementing, and maintaining Continuous Integration and Continuous Delivery (CI/CD) and infrastructure automation solutions. With extensive experience in Agile environments, they play a crucial role in driving DevOps practices, facilitating collaboration between development and operations teams, and enabling the rapid and reliable delivery of software products. This role requires advanced technical expertise, leadership abilities, and a deep understanding of Agile principles to drive successful DevOps initiatives.

Senior-Level DevOps Engineer services include:
• Infrastructure Automation: Design, implement, and manage infrastructure as code (IaC) solutions using tools such as Terraform, Ansible, or CloudFormation to automate provisioning, configuration, and management of cloud and on-premises infrastructure.
• Continuous Integration and Delivery (CI/CD): Implement and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or CircleCI to automate build, test, and deployment processes, enabling rapid and reliable software delivery.
• Containerization and Orchestration: Implement containerization solutions using Docker and container orchestration platforms such as Kubernetes or Amazon ECS to streamline application deployment, scaling, and management.
• Monitoring and Logging: Implement monitoring and logging solutions using tools such as Prometheus, Grafana, ELK Stack, or Datadog to monitor system performance, detect issues, and troubleshoot problems proactively.
• Security and Compliance: Implement security best practices and compliance standards within DevOps processes and infrastructure, ensuring the security and integrity of software products and environments.
• Agile Collaboration: Participate in Agile ceremonies such as sprint planning, daily stand-ups, and sprint reviews, collaborating with Agile teams to prioritize DevOps tasks, estimate effort, and provide regular updates on progress.
• Technical Leadership: Provide technical leadership and mentorship to junior DevOps engineers, guiding them in DevOps practices, tools, and methodologies.
• Automation and Scripting: Develop automation scripts and tools to streamline DevOps processes, improve efficiency, and reduce manual intervention.
• Cloud-Native Architecture: Design and implement cloud-native architectures using services and technologies provided by major cloud providers such as AWS, Azure, or Google Cloud Platform (GCP).
• Infrastructure as Code Best Practices: Implement best practices for infrastructure as code, including version control, testing, and code reviews, to ensure consistency, reliability, and scalability of infrastructure automation solutions.

Requirements:
• Minimum of eight (8) years of professional experience in DevOps, system administration, or software engineering roles, with significant experience in designing and implementing DevOps solutions.
• Relevant certifications such as AWS Certified DevOps Engineer or Azure DevOps Engineer Expert, demonstrating proficiency in DevOps practices and cloud technologies, are highly desirable.
• Minimum of three (3) years of experience working in Agile environments, preferably in roles involving collaboration within Agile teams.

Our Story: Our mission is to capture innovative thought, energy, and the talents of our team members, delivering intelligence services to America’s warfighters, civilians, and national security infrastructure. Leyden was created to accelerate the mission and business goals of our customers through unmatched expertise and a commitment to measurable results. A Leyden jar is an antique electrical component that stores high-voltage electric charges. A precursor to the modern battery, Leyden jars were the first capacitors. As our namesake, we endeavor to capture and provide unique professionals offering creative solutions to corporate and national security challenges.
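All the CI/CD tools this listing names (Jenkins, GitLab CI/CD, CircleCI) share one contract: stages run in order, and a failing stage halts everything downstream. A minimal model of that contract, with illustrative stage names:

```python
# Minimal model of the stop-on-failure contract shared by CI/CD tools:
# stages run in order, and the first failure aborts the pipeline.

def run_pipeline(stages):
    """stages: list of (name, zero-arg callable returning bool). Returns a log."""
    log = []
    for name, step in stages:
        ok = step()
        log.append((name, "passed" if ok else "failed"))
        if not ok:
            break                      # downstream stages never run
    return log

result = run_pipeline([
    ("build",  lambda: True),
    ("test",   lambda: False),         # simulated test failure
    ("deploy", lambda: True),          # never reached
])
print(result)
```

The reason this ordering matters operationally: a broken build can never reach the deploy stage, which is what makes the pipeline a safety gate and not just a script.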

Posted 30+ days ago

Southwest Business Corporation (San Antonio, Texas)
SWBC is seeking a talented individual to design, implement, and support automation relating to our public and private cloud software deployments and the automation of configuration management of all Development, Quality Assurance, User Acceptance, and Production servers. This key role is responsible for improving the efficiency and scalability of both software and infrastructure cloud deployments, leveraging modern toolsets and programming. In addition, this role performs systems tests, automation and initiation of software builds, code deployments, version control management, and change management associated with company systems to improve the quality of products.

Why you'll love this role: You will work among cloud, information security, technology, and business professionals in the financial services industry within an enterprise-level environment. As the DevOps Engineer, you will be involved in design decisions, implementations, and coding (Node.JS or .NET Core (C#)), and will build and support CI/CD and automation relating to our AWS cloud and private cloud software deployments and the automation of configuration management of all Development, Quality Assurance, User Acceptance, and Production servers. Bring your skills involving VPC, EC2, Route53, IAM, Lambda, and other AWS concepts; deploy releases to higher environments (DEV/QA/Prod) through automation; and work closely and collaboratively in an Agile environment with engineers. SWBC offers amazing career advancement opportunities, leverages amazing technology and automation, and celebrates our success as a team. IT leadership recognizes that empowerment, autonomy, work-life balance, professional development, continuous improvement, and a commitment to shared values are key enablers of our success.

Essential duties include the following:
• Builds engineering automation tools using DevOps principles to help streamline and scale applications into a production environment.
• Develops, analyzes, and maintains scripts, tools, hardware, and systems that support and automate processes for software releases.
• Leverages scripting (Python, Bash, PowerShell, etc.) to build tools.
• Creates automated, highly scalable, and repeatable processes.
• Proactively investigates, recommends, and develops enhancements to improve development and operational processes.
• Actively participates in the release planning process to make sure everyone is thinking through the technical release processes.
• Deploys releases to higher environments (DEV/QA/Prod) through automation.
• Works closely and collaboratively in an Agile environment with engineers and product teams to analyze issues and find new insights covering our business and operations.
• Documents results of work and prepares status reports, including successes and failures for all systems, for submission to division and corporate management.
• Maintains end-to-end security, ensuring best practices are implemented.
• Troubleshoots and resolves application development, deployment, and operational issues.
• Performs all other duties as assigned.

Serious candidates will possess the minimum qualifications:
• Bachelor’s degree in Information Technology or a related field of study from an accredited four-year college or university required.
• Minimum of five (5) years of experience in managing AWS resources and automating CI/CD pipelines required.
• AWS Certified Developer Associate and/or AWS SysOps Administrator Associate required.
• AWS Solutions Architect Associate certification required.
• Experience in release engineering, source code management, or related experience in an automated build, test, and deploy environment a plus.
• Experience with Azure or Google Cloud a plus.
• Experience automating Blue/Green, Canary, or other zero-downtime deployments.
• Experience with microservices and/or event-driven architecture.
• Experience using containerization technologies (Docker, Kubernetes, Mesos, or Vagrant).
• Virtualization experience (VMware, Hyper-V, KVM) a plus.
• Working knowledge of the Software Development Life Cycle.
• Working knowledge of QA and test methodologies.
• Knowledge of secure SDLC, OWASP Top 10, and CWE/SANS Top 25.
• Knowledge of infrastructure tools: CloudFormation, Terraform.
• Advanced understanding of VPC, EC2, Route53, IAM, Lambda, and other AWS concepts.
• Understanding of DNS, NFS, TCP/IP, and other protocols.
• Proficiency in Node.JS or .NET Core (C#).
• Strong scripting skills in PowerShell, Python, or Bash.
• NoSQL exposure (Cassandra, MongoDB, DynamoDB, etc.).
• Strong practical Windows and Linux system administration skills in the cloud.
• RDBMS exposure (MySQL, SQL Server, Aurora, etc.) a plus.
• Able to build and administer CI/CD.
• Able to sit for long periods of time performing sedentary activities.

SWBC offers*:
• Competitive overall compensation package
• Work/life balance
• Employee engagement activities and recognition awards
• Years of Service awards
• Career enhancement and growth opportunities
• Leadership Academy and Mentor Program
• Continuing education and career certifications
• Variety of healthcare coverage options
• Traditional and Roth 401(k) retirement plans
• Lucrative Wellness Program
*Based upon employee eligibility

Additional information: SWBC is a Substance-Free Workplace and requires pre-employment drug testing. Please note, SWBC does not hire tobacco users as allowed by law. To learn more about SWBC, visit our website at www.SWBC.com. If interested, please click the appropriate apply button.
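The zero-downtime blue/green deployments this posting asks about hinge on one invariant: new code is deployed and health-checked on the idle stack before any traffic moves. A hedged sketch of that flip, where the router dict and health-check callable are hypothetical stand-ins for a real load balancer and probe:

```python
# Sketch of a blue/green cutover: deploy to the idle color, verify it,
# then flip the router in one step. Router/versions shapes are made up.

def blue_green_deploy(router, versions, new_version, healthy):
    """Deploy `new_version` to the idle stack; flip traffic only if healthy."""
    idle = "green" if router["live"] == "blue" else "blue"
    versions[idle] = new_version           # live stack is untouched
    if healthy(idle):
        router["live"] = idle              # single atomic cutover
        return True
    return False                           # failed check: traffic never moved

router = {"live": "blue"}
versions = {"blue": "v1", "green": None}
print(blue_green_deploy(router, versions, "v2", lambda color: True))
```

The design payoff is the rollback path: the previous version stays warm on the now-idle color, so reverting is just flipping the router back.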

Posted 1 week ago

Link (Annapolis Junction, Maryland)
Sourcing for an SE2 resource for the Integration Team. An SE1-level candidate will also be considered, but must have a strong Kubernetes background (3+ years).
• Strong (3+ years) Kubernetes installation experience (Rancher/RKE/K3s-based preferred)
• Experience with large OpenSearch installations
• Experience troubleshooting distributed applications
• Basic NiFi administration
• Experience writing/using Salt for configuration management
• Experience writing/maintaining Helm charts
• Basic git experience
• Influx/Prometheus/Grafana experience
• Experience with the Kafka distributed event store and stream-processing platform
• Experience with the Ceph storage platform
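Writing and maintaining Helm charts, as this listing requires, is at bottom template substitution: a values file is rendered into Kubernetes manifests. A sketch of the idea using Python's string.Template in place of Helm's actual Go templating; the manifest fields and values are illustrative:

```python
from string import Template

# Helm-chart idea in miniature: substitute values into a manifest
# template. Real Helm uses Go templates with far richer logic.

manifest = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
""")

def render(values):
    """Render the manifest with the given values dict."""
    return manifest.substitute(values)

print(render({"name": "opensearch", "replicas": 3}))
```

The maintenance burden the posting alludes to comes from keeping one template correct across many values files (dev, staging, prod), which is exactly what chart testing and `helm template` dry runs are for.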

Posted 30+ days ago

Parsons (Centreville, Virginia)
In a world of possibilities, pursue one with endless opportunities. Imagine Next!When it comes to what you want in your career, if you can imagine it, you can do it at Parsons. Imagine a career working with exceptional people sharing a common quest. Imagine a workplace where you can be yourself. Where you can thrive. Where you can find your next, right now. We’ve got what you’re looking for. Job Description: Parsons is seeking a Senior TS/SCI DevOps Engineer - Flexible scheduling, mission critical, cutting edge technology!! This position will provide support design and development of a next generation customer operations platform. You will be responsible for the automation and continuous integration of software products as they are built and deployed to operational endpoints. This is your opportunity to be a part of a vital mission that directly contributes to matters of National security! Responsibilities Design, develop, and maintain DevOps framework environments to support the automation and deployment of operational capabilities Develop both customer and team documentation of the DevOps process Perform maintenance, upgrades, and troubleshooting of existing systems Provide technical support and training to project team members and customers Required Skills: Experience with distributed architectures including both Linux (RHEL, Rocky/Centos, Ubuntu, etc.) and Windows Server Operating Systems Experience with build automation technologies (Jenkins, Gradle, Maven, Ivy, etc.) Linux scripting (Bash, Python, Gawk, Perl, Groovy, etc.) 
Bachelor of Science degree in Computer Science, Information Systems Management, or a related discipline, or comparable work experience 5+ years of experience in a related field that may include DevOps responsibilities TS/SCI Government clearance Security Clearance Requirement: An active Top Secret SCI security clearance is required for this position. This position is part of our Federal Solutions team. The Federal Solutions segment delivers resources to our US government customers that ensure the success of missions around the globe. Our intelligent employees drive the state of the art as they provide services and solutions in the areas of defense, security, intelligence, infrastructure, and the environment. We promote a culture of excellence and close-knit teams that take pride in delivering, protecting, and sustaining our nation's most critical assets, from Earth to cyberspace. Throughout the company, our people are anticipating what’s next to deliver the solutions our customers need now. Salary Range: $120,800.00 - $217,400.00. We value our employees and want them to take care of their overall wellbeing, which is why we offer best-in-class benefits such as medical, dental, vision, paid time off, 401(k), life insurance, flexible work schedules, and holidays to fit your busy lifestyle! Parsons is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, veteran status or any other protected status. We truly invest in and care about our employees’ wellbeing and provide endless growth opportunities as the sky is the limit, so aim for the stars! Imagine next and join the Parsons quest. APPLY TODAY! Parsons is aware of fraudulent recruitment practices. To learn more about recruitment fraud and how to report it, please refer to https://www.parsons.com/fraudulent-recruitment/ .

Posted 1 day ago

TEGNA logo
TEGNA, Knoxville, Tennessee
About TEGNA TEGNA Inc. (NYSE: TGNA) helps people thrive in their local communities by providing the trusted local news and services that matter most. With 64 television stations in 51 U.S. markets, TEGNA reaches more than 100 million people monthly across the web, mobile apps, streaming, and linear television. Together, we are building a sustainable future for local news. TEGNA is seeking a skilled and forward-thinking DevOps Engineer to join our growing technology team. As we evolve into a modern, cloud-native, and microservices-driven engineering organization, DevOps plays a critical role in enabling velocity, reliability, and scale. In this role, you’ll collaborate across development, QA, and platform teams to streamline CI/CD processes, infrastructure management, observability, and developer experience. This position is ideal for someone who thrives on automation, performance, and continuous improvement, and who wants to help shape the future of how media technology is delivered. Responsibilities: Design, implement, and maintain scalable CI/CD pipelines using GitHub Actions and Azure DevOps Support and evolve infrastructure-as-code practices using Terraform Optimize cloud resource management and orchestration across AWS and Azure environments Enable observability and monitoring through tools like New Relic and custom dashboards Collaborate closely with engineering teams to integrate CI/CD, monitoring, and security into development workflows Ensure high availability, performance, and scalability of our cloud-native and microservice-based architecture Automate routine tasks to reduce manual effort and increase reliability in development, testing, and deployment Provide guidance and best practices on containerization (Docker) and orchestration (Kubernetes) Our Platform Environment: Cloud: AWS & Azure (cloud-native, PaaS and serverless-first) Infrastructure as Code: Terraform CI/CD: GitHub Actions, Azure DevOps Observability: New Relic, custom metrics, logging
pipelines Containers: Docker, Kubernetes Requirements: 3+ years in DevOps, Platform Engineering, or related roles Experience with CI/CD automation using GitHub Actions and/or Azure DevOps Proficient in infrastructure-as-code tools (e.g., Terraform) Experience with Docker and Kubernetes in production environments Familiar with logging, metrics, and distributed tracing platforms (e.g., New Relic, CloudWatch) Hands-on experience managing cloud-native environments (AWS and/or Azure) Knowledge of secure DevOps practices and identity/access management Comfortable working in cross-functional teams and supporting developers directly Passion for automation, developer productivity, and continuous delivery Qualities for Success: Highly collaborative and service-oriented Curious and continuous learner Strong attention to detail and operational excellence Thrives in fast-paced, hybrid working environments Aligned with TEGNA’s principles: outcome-focused, ownership-driven, and improvement-minded Why TEGNA? At TEGNA, we’re not just building systems; we’re redefining the media industry through technology. You'll join a team committed to transparency, shared understanding, and building high-impact platforms with speed and stability. As a DevOps Engineer, you’ll be at the heart of enabling our engineering teams to deliver at scale. Join us to shape the infrastructure behind great digital experiences and drive meaningful impact for millions of users! Benefits: TEGNA offers comprehensive benefits designed to safeguard the physical, mental and financial health of our employees and their families. TEGNA offers two medical plan options for full and part-time employees through Blue Cross Blue Shield of Texas, as well as access to dental and eye care coverage; fertility, surrogacy and adoption assistance; disability and life insurance. Our 401(k) program offers full, part-time and temporary employees the opportunity to contribute 1% - 80% of their pay on a pre-tax basis to TEGNA’s 401(k).
Contributions made up to the first 4% of pay are eligible for a 100% match from the company and are 100% vested from day one. Regardless of participation in TEGNA medical plans, ALL employees and their eligible family members receive nine free virtual doctor’s appointments with a physician through Teladoc, and 12 free annual therapy sessions with a licensed clinician through Spring Health. TEGNA offers a generous Paid Time Off (PTO) benefit as well as nine paid holidays per year. * Some jobs are covered by a collective bargaining agreement and thus some or all of the benefits described herein may not apply. For example, some newsroom bargaining unit employees receive health and retirement benefits under plans administered by the union. EEO statement: TEGNA Inc. is a proud equal opportunity employer, hiring and developing individuals from diverse backgrounds and experiences to add to our collaborative culture. We value and consider applications from all qualified candidates without regard to actual or perceived race, color, religion, national origin, sex, gender, age, marital status, personal appearance, sexual orientation, gender identity or expression, family responsibilities, disability, medical condition, enrollment in college or vocational school, political affiliation, military or veteran status, citizenship status, genetic information, or any other basis protected by federal, state or local law. TEGNA will reasonably accommodate qualified individuals with disabilities in accordance with applicable law. If you are in need of an accommodation in order to submit your application, please email askhr@tegna.com Recruiting Fraud Alert: To all candidates: your personal information and online safety are important to us. Only TEGNA Recruiters or Hiring Managers will reach out to you regarding consideration of your application or background.
Communications with TEGNA employees will either come from a TEGNA email address with a domain of tegna.com or one of our affiliate station domains. Recruiters or Hiring Managers will never request payments, ask for financial account information or sensitive information such as social security numbers.
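Several postings here, TEGNA's included, list deployment automation and observability as core duties. As a minimal, hedged sketch of one such task, here is a health-check poll with exponential backoff that a pipeline might run after a deploy; the function name, attempt count, and delays are illustrative assumptions, not taken from any posting:

```python
import time

def wait_until_healthy(check, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Poll check() with exponential backoff until it reports healthy.

    Returns True as soon as a probe succeeds, False once attempts are
    exhausted. check and sleep are injected so the logic can be exercised
    without a network or real delays.
    """
    for attempt in range(attempts):
        if check():
            return True
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False
```

Injecting `check` and `sleep` keeps the logic unit-testable; in practice `check` would wrap an HTTP probe against the service's health endpoint.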

Posted 30+ days ago

ASCENDING logo
ASCENDING, Fairfax, VA
Our client, one of the largest Amazon Web Services (AWS) partners for data services, is looking for a Mid to Senior level Java Developer to join their team of technologists and contribute to large-scale, innovative projects. Technological and career growth opportunities are a natural and everyday part of the working environment. Job Responsibilities: Assist the customer with developing Kibana dashboards to highlight changes to account configs and highlight security anomalies. Ensure DevOps practices and tool-sets are followed to support our drive towards continuous delivery across our multi-platform application portfolio. Assist the customer in building reference architectures to enable customer teams to adopt out-of-region DR. Develop tools and products that are used in central automation and contribute to development milestones. Continually look for ways to innovate and improve the testing process to gain efficiencies. Effectively communicate DevOps activities and project risk in oral and written formats. Build solutions and be able to think holistically about engineering challenges and architecture. Excellent understanding of computer science fundamentals: algorithm design, problem solving, complexity analysis, and data structures. Expert experience in design and implementation in terms of infrastructure creation/configuration and CI/CD. Evangelize DevOps tools and processes. Demonstrable experience architecting, designing, and developing DevOps toolsets. Good troubleshooting and debugging mindset. Awareness and exposure in deploying Deep Learning models and Big Data applications. Qualifications: At least 3 years of hands-on experience in the areas/tools listed below Strong background in Linux/Unix administration Experience with automation/configuration management. Grafana (creating dashboards from multiple data sources and presenting metrics in custom charts) Splunk Strong experience in scripting such as Python and TypeScript.
Hands-on AWS platform experience; AWS certification will be a big plus. Ability to use a variety of open-source technologies and cloud services (experience with AWS is required). Relevant services include AWS Config, CloudTrail, SCPs, KMS, VPC, and the CDK. Working knowledge of continuous integration platforms such as Jenkins, and various other deployment tools Working knowledge of source code repositories such as Git (Bitbucket), Subversion, etc. Thanks for applying! Powered by JazzHR

Posted 30+ days ago

Infinitive Inc logo
Infinitive Inc, McLean, VA
About Infinitive: Infinitive is a data and AI consultancy that enables its clients to modernize, monetize and operationalize their data to create lasting and substantial value. We possess deep industry and technology expertise to drive and sustain adoption of new capabilities. We match our people and personalities to our clients' culture while bringing the right mix of talent and skills to enable high return on investment. Infinitive has been named “Best Small Firms to Work For” by Consulting Magazine seven times, most recently in 2024. Infinitive has also been named a Washington Post “Top Workplace”, Washington Business Journal “Best Places to Work”, and Virginia Business “Best Places to Work.” About the Role: We are seeking a skilled DevOps Engineer with data engineering experience to join our dynamic team. The ideal candidate will have expertise in ElasticSearch, CI/CD, Git, and Infrastructure as Code (IaC) while also possessing experience in data engineering. You will be responsible for designing, automating, and optimizing infrastructure, deployment pipelines, and data workflows. This role requires close collaboration with data engineers, software developers, and operations teams to build scalable, secure, and high-performance data platforms. Key Responsibilities: DevOps & Infrastructure Management: Design, deploy, and manage ElasticSearch clusters, ensuring high availability, scalability, and performance for search and analytics workloads. Develop and maintain CI/CD pipelines for automating build, test, and deployment processes using tools like Jenkins, GitHub Actions, GitLab CI/CD, or ArgoCD. Manage and optimize version control workflows using Git, ensuring best practices for branching, merging, and release management. Implement Infrastructure as Code (IaC) solutions using Terraform, CloudFormation, or Ansible for cloud and on-prem infrastructure.
Automate system monitoring, alerting, and incident response using tools such as Prometheus, Grafana, Elastic Stack (ELK), or Datadog. Data Engineering & Pipeline Automation: Collaborate with data engineering teams to design and deploy scalable ETL/ELT pipelines using Apache Kafka, Apache Spark, Kinesis, Pub/Sub, Dataflow, Dataproc, or AWS Glue. Optimize data storage and retrieval for large-scale analytics and search workloads using ElasticSearch, BigQuery, Snowflake, Redshift, or ClickHouse. Ensure data pipeline reliability and performance, implementing monitoring, logging, and alerting for data workflows. Automate data workflows and infrastructure scaling for high-throughput real-time and batch processing environments. Implement data security best practices, including access controls, encryption, and compliance with industry standards such as GDPR, HIPAA, or SOC 2. Required Skills & Qualifications: 3+ years of experience in DevOps, Data Engineering, or Infrastructure Engineering. Strong expertise in ElasticSearch, including cluster tuning, indexing strategies, and scaling. Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI/CD, or ArgoCD. Proficiency in Git for version control, branching strategies, and code collaboration. Experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, Ansible, or Pulumi. Solid experience with cloud platforms (AWS, GCP, or Azure) and cloud-native data engineering tools. Proficiency in Python, Bash, or Scala for automation, data processing, and infrastructure scripting. Hands-on experience with containerization and orchestration (Docker, Kubernetes, Helm). Experience with data engineering tools, including Apache Kafka, Spark Streaming, Kinesis, Pub/Sub, or Dataflow. Strong understanding of ETL/ELT workflows and distributed data processing frameworks. Preferred Qualifications: Experience working with data warehouses and lakes (BigQuery, Snowflake, Redshift, ClickHouse, S3, GCS). 
Knowledge of monitoring and logging solutions for data-intensive applications. Familiarity with security best practices for data storage, transmission, and processing. Understanding of event-driven architectures and real-time data processing frameworks. Certifications such as AWS Certified DevOps Engineer, Google Cloud Professional Data Engineer, or Certified Kubernetes Administrator (CKA). Powered by JazzHR
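The Infinitive posting above centers on ETL/ELT pipeline reliability. As a minimal illustration of the pattern, not Infinitive's stack, here is a generator-based extract/transform/load sketch in plain Python; in production the extract and load stages would wrap Kafka, Spark, or a warehouse client, and the record fields shown are invented:

```python
def extract(rows):
    """Yield raw records; stands in for reading from Kafka, S3, etc."""
    yield from rows

def transform(records):
    """Drop records without a user_id and normalize the event name."""
    for r in records:
        if r.get("user_id") is None:
            continue
        yield {"user_id": r["user_id"], "event": r.get("event", "").lower()}

def load(records, sink):
    """Write transformed records to sink; stands in for a warehouse load."""
    count = 0
    for r in records:
        sink.append(r)
        count += 1
    return count

raw = [
    {"user_id": 1, "event": "CLICK"},
    {"user_id": None, "event": "CLICK"},   # rejected: no user id
    {"user_id": 2, "event": "View"},
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
```

Because each stage is a generator, records stream through one at a time, which is the same back-pressure-friendly shape the distributed frameworks named in the posting provide at scale.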

Posted 30+ days ago

North South Consulting Group logo
North South Consulting Group, Elizabethtown, KY
The Salesforce DevOps Engineer will ensure an Air Force recruiting platform solution runs smoothly from development to deployment. You will build and maintain continuous integration/continuous delivery (CI/CD) pipelines, manage environments, and support automated testing and release cycles. This role is central to ensuring the platform remains reliable, scalable, and flexible as new features are rolled out. By optimizing the development lifecycle, you will directly contribute to delivering new capabilities faster to recruiters in the field. This role is fully remote. Key Responsibilities Maintain CI/CD pipelines for Salesforce/MuleSoft. Manage sandbox and production environments. Automate testing and deployments. Ensure stability and compliance. Required Qualifications Bachelor’s degree in IT or related field. 3+ years DevOps or environment management experience. Certification: Salesforce DevOps or tools certification (e.g., Copado, Jenkins, GitLab). Desired Qualifications AWS or cloud certifications. DoD-compliant DevOps practices. This position is contingent upon contract award.  Powered by JazzHR

Posted 30+ days ago

AnthologyAI logo
AnthologyAI, New York City, NY
AnthologyAI’s mission is to create an equitable and fair data economy online by giving users ownership of their personal data, the ability to securely control it, and various ways to put it to work to create value with brands they trust, while always preserving their privacy. We’re on a mission to build Personalized AI. We’re led by industry veterans, backed by powerhouse investors, and crewed by the brightest minds in the game. We exist to make the internet a better place for all. The DevOps Engineer plays a crucial role at AnthologyAI in ensuring the smooth and effective operation of AnthologyAI’s platform. The position will report into the VP of Engineering. Your skills will add to the team’s expertise in designing, implementing, and maintaining the systems and infrastructure that form the backbone of AnthologyAI’s platform. You will work closely with members of AnthologyAI’s Engineering team to troubleshoot issues, optimize performance, and deploy updates, ultimately ensuring high availability, security, and scalability. By monitoring system health, analyzing performance metrics, and identifying potential vulnerabilities, you will contribute to the overall stability and reliability of the platform, enabling the company to deliver uninterrupted services, enhance customer satisfaction, and drive business growth.  Responsibilities: Collaborate with senior engineers to design, implement, and maintain our systems infrastructure. Utilize Terraform to automate the provisioning, configuration, and management of infrastructure resources across multiple cloud platforms. Implement and configure monitoring and alerting solutions using Datadog to ensure the health and performance of our systems. Work with Helm to manage deployments and updates of containerized applications within Kubernetes clusters. Assist in the deployment and management of services on public cloud platforms such as AWS and GCP. 
Contribute to the development of observability practices and tools to enable effective monitoring, logging, and debugging of distributed systems. Administer and support distributed systems, ensuring their reliability, performance, and scalability. Utilize your experience with Kafka to build and manage distributed streaming platforms. Apply your knowledge of MongoDB or other NoSQL databases to support their administration and performance tuning. Troubleshoot and resolve infrastructure-related issues, ensuring minimal impact on operations. Collaborate with cross-functional teams to ensure the seamless integration of new services and applications into our existing infrastructure. Stay up-to-date with emerging technologies and industry trends, continuously improving your technical skills and knowledge. Qualifications: Bachelor's degree in Computer Science, Engineering, Information Technology or a related field. Strong understanding of infrastructure-as-code (IaC) principles and experience using Terraform for infrastructure provisioning and management. Familiarity with monitoring and observability tools such as Datadog to track system performance, troubleshoot issues, and ensure scalability. Proficiency in managing containerized applications using Helm within Kubernetes clusters. Experience with public cloud platforms, preferably AWS and GCP, including deploying and managing services. Knowledge of distributed systems concepts, best practices, and hands-on experience with their administration. Experience with Kafka for building and managing distributed streaming platforms. Strong problem-solving skills and the ability to analyze and resolve complex infrastructure issues. Excellent communication and collaboration skills, with the ability to work effectively in a team-oriented environment. Self-motivated and eager to learn new technologies and stay updated with industry advancements. 
Bonus points for: Relevant certifications such as AWS Certified Solutions Architect, GCP Cloud Engineer, or Kubernetes certifications. Familiarity with additional tools and technologies related to infrastructure automation, containerization, and distributed systems. Salary: $160,000 - $180,000 base. Salary may vary based on experience. We offer a competitive salary, equity, and opportunities for professional development. **The position is remote, but we prefer candidates based in New York City due to occasional in-person meetings. **There is currently no relocation and/or visa (immigration) assistance provided for this position. Powered by JazzHR

Posted 30+ days ago

BookedBy logo
BookedBy, Austin, TX
Who we are: Welcome to BookedBy, an industry-leading business management solution and scheduling software for salons, spas, and barbershops everywhere. BookedBy — with headquarters in Austin, TX — features more than 100 employees across three continents and powers thousands of locations worldwide with top brands such as Sport Clips Haircuts, Diesel Barbershop, Perfect Look, Sharkey’s Cuts for Kids, Hairzoo, and more. Founded in 2011, BookedBy’s scheduling platform has more than 60 million bookings annually and is expanding into other service-based industries.  Job Summary: We are seeking a Senior DevOps Engineer to help scale and operate the infrastructure behind BookedBy’s platform. You will design, implement, and maintain AWS cloud environments, manage Kubernetes-based services, and build automation frameworks to ensure our systems are highly available, secure, and performant. You will collaborate closely with software engineers to improve developer productivity, streamline CI/CD with GitLab and ArgoCD, and bring operational excellence to production systems. You’ll also leverage AI tools to automate repetitive tasks, accelerate incident response, and continuously optimize system performance.  Key Responsibilities: Design, build, and maintain AWS cloud infrastructure supporting BookedBy’s applications.  Manage and improve Kubernetes clusters (updates, scaling, monitoring, lifecycle management).  Build and optimize CI/CD pipelines with GitLab and ArgoCD, supporting versioning, rollbacks, and progressive delivery (canary, blue/green).  Automate infrastructure provisioning and management using Infrastructure-as-Code (Terraform for infrastructure, Helm for Kubernetes applications, ArgoCD for GitOps-driven deployments).  Enhance observability with metrics, logs, and tracing (e.g., Prometheus, Grafana, Datadog, ELK).  Implement and maintain security best practices, including continuous audits and compliance readiness (e.g., SOC2).  
Support application migration to Kubernetes and modernization efforts by deprecating legacy deployment tooling.  Troubleshoot and resolve complex infrastructure and deployment issues in production.  Write and maintain runbooks and operational documentation to improve reliability and response.  Use AI-driven monitoring, anomaly detection, and automation to improve reliability and efficiency.  Participate in an on-call rotation, ensuring fast, effective incident response.  Qualifications & Skills: 5+ years of DevOps / SRE / Platform Engineering experience.  Strong expertise with AWS services (EC2, RDS, S3, Route53, ELB/ALB, IAM, networking).  Hands-on experience managing Kubernetes clusters.  Proficiency with containers (Docker) and orchestration patterns.  Strong knowledge of Terraform (infrastructure), Helm (Kubernetes packaging), and ArgoCD (GitOps deployments).  Experience building and maintaining CI/CD pipelines (GitLab preferred, but familiarity with Jenkins, GitHub Actions, or Bamboo is welcome).  Familiarity with observability stacks (Prometheus, Grafana, Datadog, ELK).  Strong scripting/automation skills (Python, Bash, Ruby).  Proficiency with Linux and command-line troubleshooting.  Solid understanding of distributed systems, scaling, and cloud security practices.  Strong collaboration and communication skills.  Excitement about applying AI to automate workflows, predict incidents, and enhance operational efficiency.  What we offer: Comprehensive Medical, Dental & Vision Insurance.  15 Days of Paid Time Off to recharge.  Hybrid Work Schedule: 3 days per week in-office (Austin HQ).  Paid Parental Leave.  In-Office Gym.   Professional Development opportunities with access to courses and learning resources.  Stock Options.  Powered by JazzHR
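The BookedBy posting mentions progressive delivery (canary, blue/green). Here is a toy sketch of the canary-promotion decision such pipelines automate; the 1% error-rate SLO and 20-point traffic step are assumptions for illustration, not BookedBy's actual policy:

```python
def next_canary_weight(current, error_rate, slo=0.01, step=20):
    """Decide the canary's next traffic share (percent).

    Promote in step-point increments while the observed error rate stays
    within the SLO; roll back to 0 the moment it breaches. 100 means fully
    promoted. Thresholds here are illustrative only.
    """
    if error_rate > slo:
        return 0  # breach: shift all traffic back to the stable release
    return min(100, current + step)

# a healthy canary walking from 20% of traffic to full promotion
weights = [20]
for observed_rate in [0.002, 0.004, 0.003, 0.001]:
    weights.append(next_canary_weight(weights[-1], observed_rate))
```

In a GitOps setup like the one described, a controller such as a progressive-delivery operator evaluates this kind of rule against live metrics and patches the traffic split accordingly.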

Posted 30+ days ago

C3 AI logo
C3 AI, Redwood City, CA
C3 AI (NYSE: AI) is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products including the C3 Agentic AI Platform, an end-to-end platform for developing, deploying, and operating enterprise AI applications; C3 AI applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally; and C3 Generative AI, a suite of domain-specific generative AI offerings for the enterprise. Learn more at: C3 AI We are looking for a highly motivated Senior Backend Engineer to join our Infrastructure & DevOps team. You will play a critical role in designing, building, and maintaining the backbone of our scalable cloud environments, CI/CD systems, and developer tooling. Your work will directly support engineering productivity and system reliability across the company. Responsibilities Design and maintain robust CI/CD pipelines to automate testing, integration, and deployment across a distributed development organization. Develop and maintain infrastructure as code (IaC) using Terraform and Ansible to provision and manage cloud environments. Build and maintain internal tools and services written in Java, Groovy, Python, and Shell. Support engineering workflows and release automation. Manage and enhance build systems leveraging Gradle and Maven to support large-scale Java-based applications. Containerize and orchestrate services using Docker, ensuring consistency across development, staging, and production environments. Deploy and manage services in Kubernetes (K8s) clusters, supporting scalability and high availability across environments. Collaborate with cross-functional teams to improve deployment workflows, monitor systems, and respond to incidents. Mentor junior engineers and contribute to improving engineering standards, tooling, and processes. Qualifications B.S./M.S. in Computer Science, Engineering, or a related technical discipline.
5+ years of backend or infrastructure engineering experience in production environments. Strong experience with CI/CD pipeline design. Proficiency in Terraform, Ansible, and infrastructure automation in cloud environments (e.g., AWS, Azure, GCP). Hands-on experience with Java, Groovy, Python, Shell scripting, and configuration of Gradle/Maven build systems. Deep understanding of Docker containerization and lifecycle management. Experience with monitoring, logging, and observability tools is a plus. Passion for clean, maintainable code, with a strong emphasis on automation and developer enablement. Nice to have: Familiarity with major cloud providers such as Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS). C3 AI provides excellent benefits, a competitive compensation package and generous equity plan.  California Base Pay Range $155,000 — $198,000 USD C3 AI is proud to be an Equal Opportunity and Affirmative Action Employer. We do not discriminate on the basis of any legally protected characteristics, including disabled and veteran status. 

Posted 30+ days ago

Keeper Security logo
Keeper Security, Inc., El Dorado Hills, CA
We are seeking a highly skilled and driven Senior Software Engineer to join our Keeper Integrations team. You’ll bring a collaborative spirit, strong full stack development expertise, and excellent communication skills to help us deliver world-class integrations. This is a 100% remote position, with the option for a hybrid schedule for candidates based in the El Dorado Hills, CA, or Chicago, IL metro areas. Keeper’s cybersecurity software is trusted by millions of people and thousands of organizations, globally. Keeper is published in 21 languages and is sold in over 120 countries. Join one of the fastest-growing cybersecurity companies and be responsible for expanding and architecting Keeper's integration in the AWS cloud with the latest technology and tools! About Keeper Keeper Security is transforming cybersecurity for organizations globally with zero-trust privileged access management built with end-to-end encryption. Keeper’s cybersecurity solutions are FedRAMP and StateRAMP Authorized, SOC 2 compliant, FIPS 140-2 validated, as well as ISO 27001, 27017 and 27018 certified. Keeper deploys in minutes, not months, and seamlessly integrates with any tech stack to prevent breaches, reduce help desk costs and ensure compliance. Trusted by millions of individuals and thousands of organizations, Keeper is the leader for password, passkey and secrets management, privileged access, secure remote access and encrypted messaging. Learn how our zero-trust and zero-knowledge solutions defend against cyber threats at KeeperSecurity.com . About the Role Keeper Security is hiring a senior engineer to lead the integration ecosystem for Keeper Secrets Manager (KSM). In this role, you will design, develop, and maintain deep integrations with industry-leading automation, DevOps, and orchestration platforms—including Red Hat Ansible Automation Platform, HashiCorp Vault, Terraform, GitHub Actions, and more. 
Your work will enable Keeper customers to seamlessly secure credentials and secrets across CI/CD pipelines, infrastructure-as-code deployments, and IT automation workflows. Responsibilities Lead the design, development, and maintenance of certified integrations for Ansible Automation Platform, HashiCorp Vault, Terraform, Jenkins, GitHub Actions, Kubernetes, and other automation tools Build Ansible Custom Credential Types, certified Ansible Collections, and Execution Environments with KSM SDK and dependencies Develop Vault plugins in Go, Terraform providers using the Plugin Framework, and GitHub Actions for secure secret management Refactor and enhance existing integrations for improved usability, IDE support, and certification readiness Implement automated testing pipelines, including unit, functional, and CI/CD publishing workflows Ensure security best practices for secret injection, ephemeral credentials, and API access patterns Produce clear, developer-friendly documentation, examples, and reference architectures Requirements 5+ years of professional software engineering experience with Python, Go, and/or Node.js Proven track record building production-grade integrations or plugins for automation/orchestration platforms Strong understanding of API security, secret management, and secure coding practices Hands-on experience with Docker and building containerized runtimes Built Ansible Collections, modules, action plugins, Execution Environments, and Custom Credential Types for Automation Platform/Tower, with familiarity in Automation Hub certification Developed Vault plugins in Go (auth methods, secrets engines) and managed plugin lifecycle (mount, enable, upgrade, versioning) Shipped Terraform providers using the Plugin Framework, with CRUD logic, acceptance tests, and Terraform Registry readiness Ability to design and implement automated test pipelines for integrations Strong communication skills for technical documentation and collaboration Preferred Requirements Supply chain hardening experience (npm, Go, Python); contributions to open-source Vault/Terraform/Ansible/Actions; Kubernetes automation experience Benefits Medical, Dental & Vision (inclusive of domestic partnerships) Employer Paid Life Insurance & Employee/Spouse/Child Supplemental life Voluntary Short/Long Term Disability Insurance 401K (Roth/Traditional) A generous PTO plan that celebrates your commitment and seniority (including paid Bereavement/Jury Duty, etc) Above market annual bonuses Keeper Security, Inc. is an equal opportunity employer and participant in the U.S. Federal E-Verify program. We celebrate diversity and are committed to creating an inclusive environment for all employees. Classification: Exempt
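The Keeper posting emphasizes secure secret injection and keeping credentials out of logs. Here is a small, hypothetical sketch of that hygiene in plain Python; `read_secret` stands in for a real secrets-manager SDK call and is not Keeper's API, and the token value is invented:

```python
import os

def read_secret(name, env=os.environ):
    """Fetch an injected secret; stands in for a secrets-manager SDK call."""
    value = env.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} was not injected")
    return value

def mask(value, keep=4):
    """Redact a secret for log output, keeping only the last keep chars."""
    if len(value) <= keep:
        return "*" * len(value)
    return "*" * (len(value) - keep) + value[-keep:]

# log the masked form, never the secret itself
token = read_secret("API_TOKEN", env={"API_TOKEN": "sk-1234567890"})
```

Passing `env` explicitly keeps the helper testable; in a pipeline the secret would arrive via injected environment variables or an SDK, and only `mask(token)` would ever reach a log line.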

Posted 30+ days ago

A logo
ACSC Auto Club Of Southern Calif - Costa Mesa, California
MuleSoft DevOps Engineer

As a MuleSoft DevOps Engineer, you are responsible for the development, support, maintenance, and implementation of complex components of a project module.

What You’ll Do:

  • Design, build, and maintain efficient, reusable, and reliable MuleSoft APIs and integrations
  • Create new B2B/B2C integrations and improve existing production system integrations
  • Monitor daily integration workflows, troubleshoot issues, and ensure performance
  • Work with product owners, architects, and business teams to understand and translate requirements
  • Identify bottlenecks and bugs, and implement solutions
  • Maintain code quality, organization, and automation
  • Configure the MuleSoft platform and Runtime Fabric, and integrate with authentication/authorization systems
  • Set up SLA alerts, transaction monitoring, and performance tuning
  • Ensure API security through proper authentication, authorization, and endpoint configuration

What You’ll Need:

  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • 5+ years in IT, with at least 2+ years in API integration development
  • Strong MuleSoft development skills in Anypoint Studio, RAML, and Swagger
  • Hands-on experience with Runtime Fabric and CI/CD tools (Maven, Jenkins, Nexus, Artifactory, Git)
  • Proficiency with REST, HTTP, MQ, JSON, SOAP protocols
  • Experience with Java, JSON, XML, SOAP, J2EE frameworks
  • Familiarity with cloud/hybrid platforms (AWS, CloudHub, GCP)
  • Understanding of API security standards (OAuth 2.0, OpenID, JWT, x509 certificates)
  • Knowledge of data formats/standards (XML, JSON) and transformation
  • Basic Linux command skills and scripting knowledge (JavaScript, PowerShell)
  • Strong troubleshooting, debugging, and performance optimization skills

#LI-SS1

The starting pay range for this position is: $129,400.00 - $172,000.00. Additionally, for full-time positions, you will be eligible to participate in our incentive program based upon the achievement of organization, team, and personal performance.
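Of the API security standards this posting lists, JWTs are the easiest to demystify: a token is three base64url-encoded segments (header, payload, signature). The sketch below, using only the Python standard library, encodes and decodes a segment for inspection; it deliberately does not verify the signature, which production code must always do against the issuer's keys. All names and field values are illustrative.

```python
import base64
import json

def b64url_encode(obj: dict) -> str:
    """Encode a dict as an unpadded base64url JWT segment."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_jwt_segment(segment: str) -> dict:
    """Decode one base64url JWT segment (header or payload).

    This only *inspects* a token for debugging; it does NOT verify
    the signature, which real services must do before trusting claims.
    """
    padded = segment + "=" * (-len(segment) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))
```

Inspecting a token's header this way (e.g. to check its `alg` field) is a common first step when troubleshooting a failing OAuth 2.0 integration.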
Remarkable benefits:

  • Health coverage for medical, dental, vision
  • 401(K) saving plan with company match AND Pension
  • Tuition assistance
  • PTO for community volunteer programs
  • Wellness program
  • Employee discounts

Auto Club Enterprises is the largest federation of AAA clubs in the nation. We have 14,000 employees in 21 states helping 17 million members. The strength of our organization is our employees. Bringing together and supporting different cultures, backgrounds, personalities, and strengths creates a team capable of delivering legendary, lifetime service to our members. When we embrace our diversity, we win. All of Us! With our national brand recognition, long-standing reputation since 1902, and constantly growing membership, we are seeking career-minded, service-driven professionals to join our team. “Through dedicated employees we proudly deliver legendary service and beneficial products that provide members peace of mind and value.”

AAA is an Equal Opportunity Employer. The Automobile Club of Southern California will consider for employment all qualified applicants, including those with criminal histories, in a manner consistent with the requirements of applicable federal, state, and local laws, including the City of Los Angeles’ Fair Chance Initiative for Hiring Ordinance (FCIHO), the Unincorporated Los Angeles County (ULAC) regulation, and the California Fair Chance Act (CFCA).

Posted 1 week ago

Builders Capital logo
Builders Capital - Puyallup, WA
We are looking for a Senior Software Development Engineer: DevOps Lead, an experienced senior development engineer with robust experience implementing and maintaining DevOps CI/CD for production enterprise-class services. Taking checked-in code through the build, test, and deployment stages, CI/CD is a critical service for the delivery of robust, mission-critical applications. The Senior Software Development Engineer: DevOps Lead is a combined role: once the CI/CD services are established, they transition to maintenance mode, and the DevOps Lead oversees the expansion of CI/CD to adjacent systems as needed. On a day-to-day basis, the DevOps Lead participates in the daily development of the BIMQuote Application and supporting services as part of an Agile development team.

The BIMQuote application is a scalable fintech solution that integrates with adjacent services managing loan status for Builders Capital, as well as with building materials partners such as manufacturers and distributors. BIMQuote (Building Information Modeling Quote) is an innovative platform that gives builders the purchasing power of national builders, delivering the best value while providing critical project coordination between suppliers and builders. BIMQuote takes a builder’s 2D plans and creates 3D models of single-family or multi-family homes. From the 3D model it creates accurate, detailed lists of the required building materials and obtains quotes from building materials suppliers with volume discounts to provide quality products to builders in a timely and efficient manner.

The role includes the following:

  • Design and implement the DevOps environment in support of the BIMQuote Application Platform, including the server application, and test and verify the supporting services
  • Develop, along with the rest of the small team, the BIMQuote server application and the associated test code, and verify that it meets the design criteria
  • Participate in code reviews with the other team members to continually improve the way we design and implement our coding practices
  • As business needs require, extend the BIMQuote application to include third-party partner organizations for interoperability purposes, including but not limited to: (partial or complete) inventory catalog download and sync; local inventory and pricing information; quotes, purchase orders, invoices, credits/returns, and payment information; and partner organization order delivery tracking and status

Requirements:

  • Degree in Engineering, Computer Science, Computer Engineering, or a similar discipline
  • 7+ years of relevant software development work experience, with at least 3 years of enterprise application development experience
  • Successful design, deployment, and ongoing operation of a DevOps CI/CD production environment for at least one enterprise-class application
  • Experience with Docker and Kubernetes container and scaling solutions
  • Proven track record of collaboration in a small team, and a demonstrated ability to influence others and achieve shared goals
  • Able to be flexible in an agile environment, identify gaps, and communicate and act as needed
  • Effective system and application design experience; performance and accuracy will be critical for this application to complete the end-to-end transactional requirements while remaining responsive to customers (internal and external)
  • Experienced C++ development plus web development skills, essential for this role to provide performance while extending to a web-based front-end experience
  • Database experience; there are aspects of inventory validation, loan status, available funds, purchase orders, and matching invoices to authorize payments
  • Demonstrated situational leadership and self-awareness

Preferred Experience:

  • Design and deployment of multiple DevOps CI/CD production environments
  • Experience with a messaging-based workflow engine; there are several steps and subtasks involved in the evolution of a digital end-to-end process, including the transactional engine. The workflow engine assigns and monitors these processes, automatically assigning items to the appropriate group and escalating as needed
  • Awareness of the training, model, and maintenance needs of AI/ML-based solutions; strategic elements of the workflow are in the process of incorporating AI capabilities

Benefits:

At BIMQuote, we believe in taking care of our team. Here’s a glimpse of the benefits that come with joining us:

  • Health Insurance – We’ve got you covered! BIMQuote pays 100% of your medical insurance premiums to keep you healthy and stress-free, offering PPO and HSA plans.
  • Competitive Compensation – We offer competitive wages that reward your expertise and hard work.
  • Paid Time Off – Take time to recharge with 3 weeks of paid time off each year.
  • Paid Holidays – Enjoy 10 paid holidays throughout the year so you can spend quality time with family, friends, or doing whatever you love.
  • Health Savings Account (HSA) – We contribute annually to your HSA account (prorated from your hire date) for those who select our HSA plan.

This role is primarily remote, with a preference for candidates in the greater Seattle/Eastside area. The team meets in person every two weeks (Bellevue, WA or Puyallup, WA) for collaboration, planning, and stakeholder engagement.

Ready to Shape the Future of Talent at BIMQuote? If you’re ready to make an impact in a fast-growing organization that values innovation, teamwork, and excellence, we’d love to hear from you. Apply now or send us a message to learn more about this exciting opportunity! Compensation for this position falls between $130,000 and $160,000 annually, depending on qualifications and background.
This job posting highlights the most critical responsibilities and requirements of the job; however, there may be additional duties, responsibilities, and qualifications for this job. Construction Loan Services II LLC (BIMQuote) and its affiliates are Equal Employment Opportunity (EEO) employers and welcome all qualified applicants. This is a full-time non-exempt position. The job description contained herein is not intended to be a comprehensive list of the duties and responsibilities of the position, which may change without notice.

Posted 4 weeks ago

T logo
Two95 International Inc. - Austin, TX
Title: Cloud Infrastructure DevOps Engineer
Position: 12+ Months Contract
Location: Austin, TX
Rate: $Market

Requirements

Minimum qualifications:

  • Bachelor’s degree in Computer Science or equivalent practical experience
  • 5-8+ years of experience with the following technologies: Python, Terraform, Ansible, Concourse CI/CD, Vault, Identity Management
  • Experience with Unix/Linux operating system internals and administration (e.g., filesystems, inodes, system calls, hardening) and networking (e.g., TCP/IP, routing, DNS, network topologies, SDN)

Preferred qualifications:

  • Expertise in designing, analyzing, and troubleshooting large-scale distributed systems
  • Ability to debug and optimize code and automate routine tasks
  • Systematic problem-solving approach coupled with strong communication skills

Benefits

If interested, please send your updated resume to rehana.j@two95intl.com and include your rate/salary requirement along with your contact details and a suitable time when we can reach you. If you know of anyone in your sphere of contacts who would be a perfect match for this job, we would appreciate it if you could forward this posting to them with a copy to us. We look forward to hearing from you at the earliest.

Posted 30+ days ago

The Swift Group logo

DevOps Engineer

The Swift Group - Augusta, Georgia


Job Description

Locations: Hanover, MD; Columbia, MD; Augusta, GA; San Antonio, TX

The Swift Group is a privately held, mission-driven and employee-focused services and solutions company headquartered in Reston, VA. Our capabilities include Software Development, Engineering & IT, Data Science, Cyber Enablement, Logistics, and Training. Founded in 2019, Swift supports Civilian, Defense, and Intelligence Community customers across the country and around the globe.

We are looking for a DevOps Engineer to join our high-performing team on a dynamic program. In this role, the DevOps Engineer will work hands-on with cloud infrastructure, containerized platforms, and automation frameworks to enable scalable, secure, and efficient delivery of mission-critical systems.

This position requires expertise in Kubernetes, AWS, automation tools and scripting, with the opportunity to support cutting-edge Big Data platforms. You will help architect, deploy, and maintain infrastructure for both cloud and on-premise systems, while championing reliability, repeatability, and performance.

Responsibilities:

  • Design, implement, and maintain cloud-native and hybrid infrastructure solutions in AWS, with a focus on Elastic Kubernetes Service (EKS)
  • Deploy and manage containerized applications using Kubernetes across development and production environments
  • Automate infrastructure provisioning and application deployment using Infrastructure as Code (IaC) tools such as Terraform and Ansible
  • Monitor and support Linux-based systems for availability and performance
  • Collaborate with development teams to integrate CI/CD pipelines and improve release cycles using tools like Git, Flux, and Jenkins
  • Continuously improve processes and tools to ensure high availability and performance
  • Support deployment of portable edge-computing infrastructure to non-cloud (on-premise) environments
  • Diagnose and resolve system issues across the stack, including networking, containers, and cloud services
  • Participate in Agile development activities and contribute to continuous process improvement initiatives
  • Stay current with emerging DevOps technologies, container orchestration strategies, and cloud capabilities
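Several of the responsibilities above, such as monitoring Linux-based systems for availability and performance, reduce to simple checks that any of the scripting languages listed in the requirements can express. Below is a minimal Python sketch using only the standard library; the threshold values and function names are illustrative assumptions, not part of the posting.

```python
import os
import shutil

def disk_usage_percent(path: str = "/") -> float:
    """Percent of the filesystem backing `path` currently in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100.0

def load_per_cpu() -> float:
    """1-minute load average normalised by CPU count (Linux/macOS only)."""
    return os.getloadavg()[0] / (os.cpu_count() or 1)

def healthy(disk_limit: float = 90.0, load_limit: float = 2.0) -> bool:
    """Illustrative thresholds: flag hosts that are nearly full or overloaded."""
    return disk_usage_percent() < disk_limit and load_per_cpu() < load_limit
```

In practice checks like these would be emitted as metrics to a monitoring stack rather than run ad hoc, but the same primitives underlie both approaches.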

Requirements: 

  • Bachelor’s degree in Computer Science or related field
  • Mid-level candidates: 3-8 years of relevant experience
  • Senior-level candidates: 9-13 years of relevant experience
  • SME-level candidates: 14+ years of relevant experience
  • Hands-on experience with AWS services such as EKS, EC2, EBS, S3, Lambda
  • Proficient in Linux systems administration and troubleshooting
  • Proficient in scripting/programming abilities in Python, Bash, or Go
  • Knowledge of containerization technologies (Docker) and orchestration platforms (Kubernetes)
  • Experience with Terraform, Ansible, and other IaC or configuration management tools
  • Familiarity with Git, Flux, and other development and deployment tools
  • Demonstrated ability to work in Agile environments and use tools such as JIRA and Confluence
  • Must be able to obtain Security+ certification within 60 days of hire
  • Must be available to work on-site 4-5 days per week; flexibility is required to align with customer needs
  • US citizenship and an active Secret clearance; will also accept TS/SCI or TS/SCI with CI Polygraph

Desired Experience:

  • Experience with big data technologies such as Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc.
  • This role may require occasional on-call work.

#LI-DI1

#Onsite 

The Swift Group and Subsidiaries are an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status, or any other protected class.

Pay Range: $49,996.80 - $290,004.00

Pay ranges are a general guideline and not intended as a guaranteed and/or implied final compensation or salary for this job opening. Determination of official compensation or salary relies on several different factors including, but not limited to: level of position, complexity of job responsibilities, geographic location, work experience, education, certifications, Federal Government contract labor categories, and contract wage rates. 

At The Swift Group and Subsidiaries, you will receive comprehensive benefits including but not limited to: healthcare, wellness, financial, retirement, education, and time off benefits. 
