Auto-apply to these data science jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

Govini – Pittsburgh, PA
Company Description

Govini transforms Defense Acquisition from an outdated manual process into a software-driven strategic advantage for the United States. Our flagship product, Ark, supports Supply Chain, Science and Technology, Production, Sustainment, and Modernization teams with AI-enabled applications and best-in-class data to more rapidly imagine, develop, and field the capabilities we need. Today, the national security community and every branch of the military rely on Govini to enable faster and more informed Acquisition decisions.

Job Description

We are seeking an inquisitive data scientist to join our team and work with our various datasets to find connections, knowledge, and potential issues that help our government clients make more informed decisions. Your role on the Data Science team will focus primarily on our large datasets, and you will design and implement scalable statistical systems based on your analysis. You will have the opportunity to expand and grow data-driven research across Govini and to lead new areas where advanced analytics can drive business results.

To do this job well, you must be a data nerd with a strong understanding of the fundamentals of data science and analysis who knows how to bring data to life. You are a highly organized problem-solver with excellent oral and written communication skills: independent, driven, and motivated to jump in and roll up your sleeves to get the job done. You lead by influence and motivation, have a passion for great work, and share our intolerance of mediocrity. You're uber-smart, challenged by figuring things out and producing simple solutions to complex problems. Knowing there are always multiple answers to a problem, you engage in constructive dialogue to find the best path forward. You're scrappy. We like scrappy. We need a creative, out-of-the-box thinker who shares our passion and obsession with quality.

This is a full-time position located out of our office in Pittsburgh, PA, and it may require up to 25% travel.

Scope of Responsibilities

- Design experiments, test hypotheses, and build models for advanced data analysis and complex algorithms
- Apply advanced statistical and predictive modeling techniques to build, maintain, and improve multiple real-time decision systems
- Make strategic recommendations on data collection, integration, and retention requirements, incorporating business requirements and knowledge of data industry best practices
- Model and frame business scenarios that are meaningful and that impact critical processes and decisions; transform, standardize, and integrate datasets for client use cases
- Convert custom, complex, and manual client data analysis tasks into repeatable, configurable processes for consistent and scalable use within the Govini SaaS platform
- Optimize processes for maximum speed, performance, and accuracy; craft clean, testable, and maintainable code
- Partner with internal Govini business analysts and external client teams to find the best solutions for data-driven problem solving
- Participate in end-to-end software development on an agile scrum team, collaborating closely with fellow software, machine learning, data, and QA engineers

Qualifications

U.S. citizenship is required.

Required Skills:

- Bachelor's degree in Computer Science, Computer Engineering, Mathematics, Statistics, or a related field; Master's or PhD preferred
- 7+ years of hands-on data science experience
- 7+ years deriving key insights and KPIs for external and internal customers
- Regular development experience in Python
- Prior hands-on experience working with data-driven analytics
- Proven ability to develop solutions to loosely defined business problems by leveraging pattern detection over large datasets
- Proficiency in statistical analysis, quantitative analytics, forecasting/predictive analytics, multivariate testing, and optimization algorithms
- Experience using machine learning algorithms (e.g., gradient-boosted machines, neural networks)
- Ability to work independently with little supervision
- Strong communication and interpersonal skills
- A burning desire to work in a challenging, fast-paced environment

Desired Skills:

- Current possession of a U.S. security clearance, or the ability to obtain one with our sponsorship
- Experience in or exposure to the nuances of a startup or other entrepreneurial environment
- Experience working on agile/scrum teams
- Experience building datasets from common database tools using flavors of SQL
- Expertise with automation and streaming data
- Experience with major NLP frameworks (spaCy, fastText, BERT)
- Familiarity with big data frameworks (e.g., HDFS, Spark) and AWS
- Familiarity with Git source control management
- Experience working in a product organization
- Experience analyzing financial, supply chain/logistics, or intellectual property data

We firmly believe that past performance is the best indicator of future performance. If you thrive while building solutions to complex problems, are a self-starter, and are passionate about making an impact in global security, we're eager to hear from you.

Govini is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law.
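For a purely illustrative flavor of the modeling skills this posting names (Python, gradient-boosted machines), here is a minimal sketch using scikit-learn on synthetic data; the dataset and every parameter below are hypothetical, not Govini's actual stack or data:

```python
# Illustrative only: train and evaluate a gradient-boosted classifier
# on synthetic data. Nothing here reflects any real Govini system.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large tabular dataset
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A gradient-boosted machine, one of the algorithm families the posting names
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1)
model.fit(X_train, y_train)

# Evaluate with AUC, a common KPI for binary classifiers
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")
```

A real workflow would swap in domain features and tuned hyperparameters, but the fit/evaluate shape stays the same.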

Posted 30+ days ago

Govini – Pittsburgh, PA
Company Description

Govini transforms Defense Acquisition from an outdated manual process into a software-driven strategic advantage for the United States. Our flagship product, Ark, supports Supply Chain, Science and Technology, Production, Sustainment, and Modernization teams with AI-enabled applications and best-in-class data to more rapidly imagine, develop, and field the capabilities we need. Today, the national security community and every branch of the military rely on Govini to enable faster and more informed Acquisition decisions.

Job Description

We are seeking a highly motivated and skilled Data Engineering Manager to join our dynamic team. As a Data Engineering Manager, you will lead a team of data engineers in data ingestion, transformation, and the creation of analytics solutions. Your role will be instrumental in driving our data engineering vision and ensuring the successful delivery of high-quality data products and services. In addition to your managerial responsibilities, you must be willing to jump in and be hands-on when necessary; your hands-on involvement will play a crucial role in mentoring your team, tackling complex data engineering challenges, and setting a high standard for data quality. If you are passionate about cutting-edge technologies, data quality, and leading a talented team, this role is for you.

This is a team member position, working onsite in our Pittsburgh, PA office, and it may require up to 10% travel.

Scope of Responsibilities

- Lead and manage a team of data engineers, fostering a collaborative and innovative environment
- Provide guidance, mentorship, and support to team members, encouraging their professional growth and development
- Coordinate with cross-functional teams to align data engineering efforts with broader organizational goals and user requirements
- Develop and articulate a clear data engineering vision that aligns with the company's strategic objectives
- Drive the implementation of the data engineering vision, ensuring it is aligned with business priorities and timelines
- Oversee the design and development of data pipelines, data models, and data architecture to support analytics and reporting needs
- Collaborate with stakeholders to identify and prioritize data engineering projects and initiatives
- Allocate resources efficiently and manage project timelines to deliver high-quality solutions on time and within budget
- Stay up to date with the latest trends, tools, and technologies in the data engineering field
- Evaluate and recommend new technologies that can improve data engineering processes and enhance overall efficiency
- Develop and implement processes to ensure data quality is built into data pipelines from the outset

Qualifications

U.S. citizenship is required.

Required Skills:

- Bachelor's degree in Computer Science, Mathematics, or equivalent experience
- 7+ years in a technical data engineering role
- 3+ years of people management
- Experience implementing streaming data pipelines
- Experience with various data modeling techniques, such as relational and dimensional
- Experience with AWS Aurora (Postgres or MySQL)
- Proficiency in SQL, stored procedures, and functions
- Proficiency in big data technologies such as Spark, Hadoop, Databricks, and Kafka
- Strong analytical ability and attention to detail

Desired Skills:

- Current possession of a U.S. security clearance, or the ability to obtain one with our sponsorship
- Experience in or exposure to the nuances of a startup or other entrepreneurial environment
- Experience working with Kubernetes
- Knowledge of cloud design patterns and building well-architected solutions
- Working knowledge of large (multi-terabyte) datasets

We firmly believe that past performance is the best indicator of future performance. If you thrive while building solutions to complex problems, are a self-starter, and are passionate about making an impact in global security, we're eager to hear from you.

Govini is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law.

Posted 30+ days ago

Redhorse Corporation – Denver, CO
About the Organization

Now is a great time to join Redhorse Corporation. We are a solution-driven company delivering data insights and technology solutions to customers with missions critical to U.S. national interests. We're looking for thoughtful, skilled professionals who thrive as trusted partners building technology-agnostic solutions and want to apply their talents to supporting customers with difficult and important mission sets.

About the Role

Redhorse Corporation is seeking a skilled Data Management Specialist to join our team supporting the Bureau of Land Management's (BLM) National Operations Center. In this critical role, you will be instrumental in designing, developing, and maintaining robust data solutions that directly impact the BLM's ability to manage and protect America's public lands. You'll work closely with BLM staff and project leaders to ensure data integrity, accessibility, and efficient utilization across various systems. This is an opportunity to make a tangible difference in land management and contribute to a mission-driven organization.

Key Responsibilities

- Design, develop, implement, and maintain business data solutions using ESRI's ArcGIS software
- Support data collection, consolidation, sharing, and other general data management activities
- Determine and document data integrity and quality, identifying and implementing quality control metrics
- Work with clients and project leaders to identify GIS and tabular data requirements
- Utilize data management techniques, from aggregation to statistical analysis
- Maintain metadata and lineage documentation for continually updated datasets
- Ensure data requirements, standards, access rules, and business rules are followed
- Design and create data reports and reporting tools to support executive decision-making
- Analyze and mine business data to identify patterns and correlations
- Develop quality control procedures for datasets
- Manage data within BLM infrastructure (e.g., ESRI's ArcGIS software)
- Identify and document reference data sources, integration processes, and domain values
- Participate in weekly/monthly BLM geospatial calls
- Prepare weekly and monthly status reports

Required Experience/Clearance

- Bachelor's degree and a minimum of 10 years of experience in data management
- Significant professional experience with ESRI's ArcGIS software
- Professional experience with office automation software (Adobe, Microsoft Word, Excel, Visio, SharePoint)
- Experience developing written technical documentation (metadata, training materials, workflow diagrams, etc.)
- Ability to pass a federal background check (required prior to accessing government computers/networks)

Desired Experience

- Experience with geodatabase schema development
- Experience with data replication processes and data quality reporting
- Experience with data modeling and working with data stewards and data administrators
- Experience with map design and data management in web GIS environments (e.g., ArcGIS Online)
- Proficiency in using custom or out-of-the-box ESRI ArcGIS toolbox applications
- Experience with data analysis and mining techniques beyond basic statistical analysis
- Experience supporting a large-scale geospatial data program

Compensation range for this position: starting at $85,000/year, up to $100,000/year.

Redhorse benefits include:

- Medical, dental, and vision coverage
- Healthcare and Dependent Care Flexible Spending Accounts
- Health Savings Account
- Life and disability coverage
- Voluntary coverages (accident, hospital, and critical illness)
- Employee Assistance Plan
- Retirement plans

Equal Opportunity Employer/Veterans/Disabled

Accommodations: If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to access job openings or apply for a job on this site as a result of your disability. You can request reasonable accommodations by contacting Talent Acquisition at Talent-Acquisition@redhorsecorp.com.

Redhorse Corporation shall, in its discretion, modify or adjust the position to meet Redhorse's changing needs. This job description is not a contract and may be adjusted as deemed appropriate in Redhorse's sole discretion.

Posted 2 weeks ago

Redhorse Corporation – Falls Church, VA
About the Organization

Now is a great time to join Redhorse Corporation. We are a solution-driven company delivering data insights and technology solutions to customers with missions critical to U.S. national interests. We're looking for thoughtful, skilled professionals who thrive as trusted partners building technology-agnostic solutions and want to apply their talents to supporting customers with difficult and important mission sets.

About the Role

Redhorse is seeking a skilled Mid-Level Data Engineer to join our team supporting the Chief Digital and Artificial Intelligence Office (CDAO) within the Department of Defense (DoD). You will play a crucial role in advancing data-driven decision-making by developing and maintaining robust data pipelines, ensuring the secure and efficient flow of information across diverse environments. This is a high-impact position offering significant contributions to national security initiatives and investments.

Key Responsibilities

- Support the configuration and ingestion of designated structured, unstructured, and semi-structured data repositories into capabilities that satisfy mission partner requirements and support a data analytics and DevOps pipeline that drives rapid delivery of functionality to the client
- Maintain all operational aspects of data transfers, accounting for the security posture of the underlying infrastructure and the systems and applications supported, and monitor the health of the environment through a variety of health-tracking capabilities
- Automate configuration management, leverage tools, and stay current on data extract, transform, and load (ETL) technologies and services
- Work under general guidance, demonstrate initiative in developing approaches to solutions independently, review architecture, and identify areas for automation, optimization, right-sizing, and cost reduction to support the overall health of the environment
- Apply comprehension of data engineering-specific technologies and services, leverage expertise in databases and a variety of approaches to structuring and retrieving data, comprehend cloud architectural constructs, and support the establishment and maintenance of cloud environments programmatically using vendor consoles
- Engage with multiple functional groups to understand client challenges, prototype new ideas and technologies, help create solutions that drive the next wave of innovation, and design, implement, schedule, test, and deploy full features and components of solutions
- Maintain an existing collection of web scraping tools used as the initial step of the ETL process
- Identify and implement scalable and efficient coding solutions

Required Experience/Clearance

- Bachelor's degree plus 3 years of experience, or a Master's degree plus 3 years of experience
- Experience with Big Data systems, including Apache Spark/Databricks
- Experience with ETL processes
- Experience in the ADVANA data environment
- Experience with Amazon Web Services (AWS), Microsoft Azure, or MilCloud 2.0
- Experience applying DoD Security Technical Implementation Guides (STIGs) and automating that process
- Experience with multiple coding languages
- Active Secret security clearance

Equal Opportunity Employer/Veterans/Disabled

Accommodations: If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to access job openings or apply for a job on this site as a result of your disability. You can request reasonable accommodations by contacting Talent Acquisition at Talent-Acquisition@redhorsecorp.com.

Redhorse Corporation shall, in its discretion, modify or adjust the position to meet Redhorse's changing needs. This job description is not a contract and may be adjusted as deemed appropriate in Redhorse's sole discretion.

Posted 30+ days ago

N1 Health – Boston, MA
Note: At this time, we are not able to offer visa sponsorship. Candidates must be authorized to work in the United States without the need for current or future sponsorship.

N1 Health is an AI platform company that helps healthcare organizations prioritize, action, and maximize patient and member interactions. N1 Health's market-leading Applied AI Platform provides healthcare companies with pre-packaged models, curated third-party data, and a secure, scalable technology platform that enables the deployment of targeted services at the individual, household, and neighborhood level. Data science-driven insights lead to relevant, specific, and help-first interventions that optimally connect individuals to resources based on their specific needs and the capacity of the system.

Only 20% of a person's health outcomes are driven by their interactions with the health care system; the remaining 80% are driven by external factors. We're working to empower our customers to use data science and digital technologies to better serve their most vulnerable members and patients. We're passionate, creative, and motivated, and we're looking for team members who are the same. We are enthusiastic learners and believe fundamentally that this is a two-way street: we'll invest in your learning and growth, just as you'll advance the company's mission and support our clients through your work.

Role Overview

We are seeking a skilled and motivated Data Ops Engineer to join our Technology Enabled Service Delivery function. In this role you will manage the broad complexity of healthcare data and create scalable solutions to deliver to end users. As a client-facing engineer, you will work on teams of data scientists, project managers, and expert staff to manage and understand client data, and you will own data specifications, schema design, data pipelines, and ML pipelines for these teams, using Python, SQL, and CLI tools to maintain, automate, and scale your solutions. You will also think creatively about building new tools that drive innovation for delivering consistent and reliable results, add value to your N1 teams and clients, and communicate with stakeholders about how best to deploy them to drive meaningful healthcare outcomes. We are looking for motivated teammates who are ready to learn and grow.

The Ideal Candidate

- Has a demonstrated ability to work on complex projects with multiple stakeholders
- Is a proactive team player who takes ownership and initiative on a variety of problems
- Has the ability to tell a compelling data-driven story
- Has strong written and verbal communication skills with both technical and non-technical audiences
- Is committed to understanding the healthcare industry, including data, privacy, and security requirements

Technical Requirements

- Bachelor's degree or education in computer science or a related field
- 0-2 years of prior software engineering experience
- Proficiency in Python and SQL, or demonstrated ability to learn new languages quickly
- Proficiency using GitHub for collaborative development
- Familiarity with Agile product development and tools such as Jira, Confluence, etc.
- Excellent troubleshooting and debugging skills
- Experience working with AWS/Athena or similar cloud computing platforms (a plus)
- Familiarity with Linux/CLI tools (a plus)

Bonus and equity.

If you are excited about a role but your experience doesn't seem to align perfectly with every element of the job description, we encourage you to apply. You may be just the right candidate for this, or one of our many other roles.

We celebrate diversity and are committed to creating an inclusive environment for all employees. N1 Health is proud to be an Equal Opportunity Employer. Our vision is to foster an environment in which all N1 Health employees of diverse backgrounds and identities feel supported and empowered through the ongoing development of a shared, inclusive culture. We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity, sexual orientation, age, marital status, disability, protected veteran status, or any other legally protected characteristics.

Posted 3 weeks ago

GoFundMe – San Francisco, CA
Want to help us help others? We're hiring!

GoFundMe is the world's most powerful community for good, dedicated to helping people help each other. By uniting individuals and nonprofits in one place, GoFundMe makes it easy and safe for people to ask for help and support causes, for themselves and each other. Together, our community has raised more than $40 billion since 2010. Join us!

GoFundMe is looking for a dedicated Data & Analytics Intern to join our Data and Decision Science team and help elevate our analytics at GoFundMe. This role will support the Decision Science roadmap through the cultivation, curation, and visualization of timely and relevant data for our team and other stakeholders at GoFundMe. You're an ideal candidate if you love all things data and want to accelerate your career in data and analytics at a fast-growing organization. You have demonstrated experience working with data, are highly collaborative, and are passionate about evangelizing and showing what is possible with data.

This is a 10-week internship program that runs from May 26, 2026 to August 7, 2026. The program will be based in San Francisco, CA, and interns will be expected onsite three days per week.

The Job…

- Provide data-driven analyses using SQL and/or Looker to inform our roadmap and deliver insights to our stakeholders
- Inventory the BI environment and work with stakeholders to gather usage requirements and develop documentation
- Clean and organize data from various sources into our Snowflake warehouse
- Develop high-level dashboards in Looker to support data-driven strategy and insights
- Collaborate with the Decision Science team to drive ad hoc requests and define metrics for dashboard development
- Identify hypotheses and participate in experiment design; make clear, coherent, and holistic recommendations based on test results

You…

- Are highly data-driven and excited to grow your career in data analytics
- Are currently pursuing a degree related to Analytics, Data Science, Computer Science, or Operations Research
- Have an understanding of data visualization and BI tools (e.g., Looker or Tableau)
- Are familiar with SQL and cleaning/processing data
- Have a passion for organization and documentation
- Have strong written and verbal communication skills and the ability to build strong relationships and trust with all partners at GoFundMe, both technical and non-technical

Nice to have…

- Demonstrated experience with data science using Python

Why you'll love it here

- Make an impact: Be part of a mission-driven organization making a positive difference in millions of lives every year.
- Innovative environment: Work with a diverse, passionate, and talented team in a fast-paced, forward-thinking atmosphere.
- Collaborative team: Join a fun and collaborative team that works hard and celebrates success together.
- Competitive benefits: Enjoy competitive pay and comprehensive healthcare benefits.
- Holistic support: Enjoy financial assistance for things like hybrid work and family planning, along with generous parental leave, flexible time-off policies, and mental health and wellness resources to support your overall well-being.
- Growth opportunities: Participate in learning, development, and recognition programs to help you thrive and grow.
- Commitment to DEI: Contribute to diversity, equity, and inclusion through ongoing initiatives and employee resource groups.
- Community engagement: Make a difference through our volunteering and Gives Back programs.

We live by our core values: impatient to be great, find a way, earn trust every day, fueled by purpose. Be a part of something bigger with us!

GoFundMe is proud to be an equal opportunity employer that actively pursues candidates of diverse backgrounds and experiences. We do not discriminate on the basis of race, color, religion, ethnicity, nationality or national origin, sex, sexual orientation, gender, gender identity or expression, pregnancy status, marital status, age, medical condition, mental or physical disability, or military or veteran status.

The hourly rate for this position is $35.00. As this is a hybrid position, the pay rate was determined by role, level, and possible location across the US. Individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific pay rate based on your location during the hiring process.

If you require a reasonable accommodation to complete a job application or a job interview, or to otherwise participate in the hiring process, please contact us at accommodationrequests@gofundme.com.

Learn more about GoFundMe: We're proud to partner with GoFundMe.org, an independent public charity, to extend the reach and impact of our generous community while helping drive critical social change. You can learn more about GoFundMe.org's activities and impact in their FY '24 annual report. Our annual "Year in Help" report reflects our community's impact in advancing our mission of helping people help each other. For recent company news and announcements, visit our Newsroom.

Posted 4 days ago

Horace Mann – Springfield, IL
The Senior Data Analyst is responsible for the design, development, and analysis of complex reports, queries, and data extracts from multiple data sources to support strategic initiatives and enable leadership to efficiently measure performance.

Responsibilities:

- Mine data from primary and secondary sources, clean and prune data to discard irrelevant information, and work with data source owners to clean up data sources and suggest/manage data store(s)
- Triage code problems and data-related issues, and troubleshoot and correct data load exceptions or inaccuracies as required
- Analyze and interpret results, pinpointing trends, correlations, and patterns in complicated data sets using statistical toolsets
- Develop and maintain data pipelines in support of various analyses across multiple areas
- Develop and maintain methods to organize and compare disparate data sets from various data structures, including but not limited to databases, flat files, XML, and JSON
- Develop and maintain proprietary packages to support ETL, storage, access, and analysis operations used for prototyping and product deployment
- Develop a process for prioritizing data need requests, and support and communicate plans and status to customers and senior leadership
- Review and refine the data governance strategy for S&G, including but not limited to data flow documentation, data dictionaries, governance methodology, and identifying "one source of truth" for all data sources within S&G
- Partner with the Information Technology team to define a data management strategy for S&G

Qualifications

- A Bachelor's degree in statistics, math, engineering, computer science, economics, business, or a related technical field
- 3-5 years of overall data analyst experience, preferably in the insurance industry
- Strong communication, strategic thinking, and involvement in cross-divisional data projects
- Ability to build data models and internal/external reports, analyze the data, identify trends, and communicate results
- Experience developing and maintaining ETL and/or modeling data pipelines
- Extensive experience with query tools (e.g., SQL, Business Objects) as well as statistical modeling and data visualization tools (Qlik or Power BI preferred)
- Advanced-level knowledge of relational databases, working with large/complex data sets, data warehousing, data mining, and data mapping
- Working understanding of the insurance industry, highly preferred
- Experience solving real-world data problems with code
- Demonstrated problem-solving, decision-making, and process-improvement skills
- Strong intellectual curiosity and capabilities
- Familiarity and experience with competitive analysis and communicating to a diverse audience of stakeholders
- Ability to manage deliverables and expectations across multiple key stakeholders

Pay Range: $71,500.00 - $105,400.00. Salary is commensurate with experience, location, etc.

Horace Mann was founded in 1945 by two Springfield, Illinois, teachers who saw a need for quality, affordable auto insurance for teachers. Since then, we've broadened our mission to helping all educators protect what they have today and prepare for a successful tomorrow. And with our broadened mission has come corporate growth: we serve more than 4,100 school districts nationwide, we're publicly traded on the New York Stock Exchange (symbol: HMN), and we have more than $12 billion in assets. We're motivated by the fact that educators take care of our children's future, and we believe they deserve someone to look after theirs. We help educators identify their financial goals and develop plans to achieve them. This includes insurance to protect what they have today and financial products to help them prepare for their future. Our tailored offerings include special rates and benefits for educators.

EOE/Minorities/Females/Veterans/Disabled. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

For applicants who are California residents, please review our California Consumer Privacy Notice. All applicants should review our Horace Mann Privacy Policy.

Posted 1 week ago

NISC – Cedar Rapids, IA
Company Overview:

NISC develops and implements enterprise-level and customer-facing software solutions for 960+ utilities and broadband providers across North America. Our mission is to deliver technology solutions and services that are Member-focused, quality-driven, and value-priced. We exist to serve our Members and help them serve their communities through our innovative software products, services, and outstanding customer support. NISC has been ranked in ComputerWorld's Best Places to Work for 20+ years, and we are looking for qualified individuals to join our team.

Position Overview:

We're looking for a Data Engineer Intern to join our dynamic Data Engineering team. This internship is a great opportunity for students who are passionate about data, cloud technologies, and building scalable data solutions. You'll gain hands-on experience working alongside experienced engineers on real-world projects that support our enterprise software and data strategy. Applications submitted now will be under consideration for Summer 2026 (May - August).

Work Schedule:

Hybrid from one of our office locations: Cedar Rapids, IA; Lake Saint Louis, MO; or Mandan, ND. Requirement: a minimum of 3 days per week working out of an office location, with the ability to work up to all 5 days a week from an office location.

Essential Functions:

- Assist in building and maintaining data pipelines that support analytics and application development
- Learn about and contribute to data architecture using AWS and Databricks
- Help gather, clean, and transform data from various sources into usable formats
- Collaborate with software developers, data analysts, and product teams to understand data needs
- Participate in team meetings, code reviews, and brainstorming sessions
- Explore opportunities to automate manual processes and improve data workflows
- Contribute to documentation and internal knowledge sharing
- Ensure that all information is appropriately entered and utilized in Confluence and/or the iVUE Support tool

Desired Job Experience:

- Familiarity with SQL and at least one programming language (e.g., Python, Java, or Scala)
- Interest in cloud platforms (especially AWS) and big data tools like Spark or Databricks
- Exposure to data pipeline tools (e.g., Airflow, Hevo Data), preferred
- Experience with cloud services like AWS S3, Lambda, or EC2, preferred
- Understanding of data warehousing or lakehouse concepts, preferred
- Familiarity with version control systems like Git, preferred
- Strong problem-solving skills, attention to detail, and a willingness to learn new technologies
- Ability to work both independently and collaboratively in a team environment
- Strong verbal, written, interpersonal, and communication skills
- Ability to effectively adapt to change
- Ability to interact in a positive manner with internal and external contacts
- Ability to maintain the highest level of professionalism, ethical behavior, and confidentiality
- Commitment to NISC's Statement of Shared Values

NISC's Shared Values & Competencies:

We're a cooperative, which means we're owned by the Members we serve. It also means that our focus is on taking care of our Members and our employees, rather than on a big bottom line. Quality service and innovative technology start with happy and dedicated employees. Join our team and learn for yourself what sets NISC apart.

- Integrity – We are committed to doing the right thing – always.
- Relationships – We are committed to building and preserving lasting relationships.
- Innovation – We promote the spirit of creativity and champion new ideas.
- Teamwork – We exemplify the cooperative spirit by working together.
- Empowerment – We believe individuals have the power to make a difference.
- Personal Development – We believe the free exchange of knowledge and information is absolutely necessary to the success of each individual and the organization.

Why Intern at NISC:

- Work on meaningful projects that impact real customers
- Learn from experienced engineers and mentors
- Gain exposure to enterprise-level data architecture and tools
- Be part of a collaborative, mission-driven culture
- Potential for future full-time opportunities

Desired Education and/or Certification(s):

- High school diploma or equivalency required
- Pursuing a Bachelor's degree in a computer science-related field

Minimum Physical Requirements:

The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this position. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. While performing the essential functions of this position, employees must be able to see and communicate. Employees are regularly required to maintain a stationary position, move, and operate computer keyboards or office equipment.

Disclaimer: Management may modify this job description by assigning or reassigning duties and responsibilities at any time.
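As a purely illustrative aside on the pipeline work the Essential Functions above describe (gathering, cleaning, and transforming raw data with Spark/Databricks-style tooling), a minimal PySpark sketch might look like the following; the paths, bucket, and column names are hypothetical, not NISC systems:

```python
# Illustrative only: read raw CSV data with Spark, clean it, and write it
# out in a columnar format for downstream analytics.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("intern-pipeline-sketch").getOrCreate()

# Hypothetical raw input (e.g., utility meter readings)
raw = spark.read.csv("s3://example-bucket/raw/meters.csv", header=True)

cleaned = (
    raw.dropDuplicates(["meter_id", "read_date"])            # drop duplicate reads
       .withColumn("kwh", F.col("kwh").cast("double"))       # enforce numeric type
       .filter(F.col("kwh").isNotNull() & (F.col("kwh") >= 0))  # remove bad rows
)

# Parquet is a common columnar target format for analytics workloads
cleaned.write.mode("overwrite").parquet("s3://example-bucket/clean/meters/")
```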

Posted 30+ days ago

Raft – Hickam AFB, HI
This is a U.S.-based position. All of the programs we support require U.S. citizenship to be eligible for employment, and all work must be conducted within the continental U.S.

Who we are:

Raft (https://TeamRaft.com) is a customer-obsessed, non-traditional small business with a purposeful focus on Distributed Data Systems, Platforms at Scale, and Complex Application Development, with headquarters in McLean, VA. Our range of clients includes innovative federal and public agencies leveraging design thinking, a cutting-edge tech stack, and a cloud-native ecosystem. We build digital solutions that impact the lives of millions of Americans. We're looking for a Data Engineer to support our customers and join our passionate team of high-impact problem solvers.

About the role:

Data Engineers on our Distributed Systems team focus on building data platforms that make it easy for different user personas to access data from a central control plane. This includes building backend services, connecting OSS projects in a repeatable and performant way, and extending feature sets. The work involves data engineering and all things data, utilizing newer tools like DuckDB, Apache Pinot, Apache Superset, and others. In this role you will work closely with technical SMEs and government stakeholders to build, configure, and deploy the next generation of the data platform for mission-critical needs.

Required Qualifications:

- 2+ years of backend development experience with at least one language such as Java, Go, or Python
- Basic understanding of, or exposure to, data streaming concepts (Kafka experience preferred but not required)
- Willingness to learn about government datasets and data transformation processes
- Strong problem-solving skills and the ability to break down complex problems into smaller tasks
- Interest in staying current with data engineering trends and technologies
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
- Strong communication skills and the ability to work collaboratively in a team environment

Highly preferred:

- Internship or coursework experience with data processing or ETL concepts
- Exposure to cloud platforms (AWS, Azure, GCP)
- Past use of SQL and an understanding of database concepts
- Experience with version control (Git) and basic DevOps practices
- Previous internship or project work involving data analysis or engineering
- Interest in or exposure to open source technologies

Clearance Requirements: Active Secret security clearance, with the ability to obtain and maintain TS/SCI.

Work Type: Onsite at Hickam AFB, HI. May require up to 25% travel.

Salary Range: $100,000 - $140,000. Compensation is determined by a candidate's comprehensive experience, demonstrated skill, and proven abilities.

What we will offer you:

- Highly competitive salary
- Fully covered healthcare, dental, and vision coverage
- 401(k) and company match
- Take-as-you-need PTO + 11 paid holidays
- Education & training benefits
- Generous referral bonuses
- And more!

Our Vision Statement: We bridge the gap between humans and data through radical transparency and our obsession with the mission.

Our Customer Obsession: We approach every deliverable like a product and adopt a customer-obsessed mentality. As we grow and our footprint becomes larger, teams and employees will treat each other not only as teammates but as customers. We must live the customer-obsessed mindset, always. This will help us scale, and it will translate to the interactions that our Rafters have with their clients and the other product teams they integrate with. Our culture will enable our success and set us apart from other companies.

How do we get there? Public-sector modernization is critical for us to live in a better world. We at Raft want to innovate and solve complex problems. And if we are successful, our generation and the ones that follow will live in a delightful, efficient, and accessible world where out-of-the-box thinking and collaboration are the norm.

Raft's core philosophy is Ubuntu: I Am, Because We Are. We support our "nadi" by elevating other Rafters. We work as a hyper-collaborative team where each member brings a unique perspective, adding value that did not exist before. People make Raft special. We celebrate each other and our cognitive and cultural diversity. We are devoted to our practice of innovation and collaboration.

We're an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran, or disability status.

Posted 2 weeks ago

Ownwell – Austin, TX
Company Background:

Ownwell has developed a leading end-to-end property tax solution that is purpose-built for SFR and CRE investors, operators, and property managers. We have brought data science and machine learning to a space that is ripe for disruption. We combine a best-in-class technology stack with local market expertise to reduce expenses, increase net operating income, and drive operational efficiency for both our institutional clients and individual homeowners. Ownwell's solution ensures you have the necessary tools, resources, and information to confidently manage your property taxes.

Ownwell has been recognized, both in Austin and nationally, as a top workplace by the likes of Fortune, BuiltIn, Inc., and Best Places To Work. We are well-funded and venture-backed by some of the best investors in the world, such as First Round Capital and Bessemer Venture Partners. Our customer base has grown by more than 300% year-over-year with exceptional feedback demonstrating clear product-market fit. We are looking for driven and passionate team members who thrive in a collaborative, positive culture where we all win together. If this sounds like the place for you, come help us change the way everyday homeowners manage their real estate across the country.

Our Culture

People are our superpower! Centered in everything we do is a true sense of team. We listen and we learn from each other. We are on this rocketship together and embrace a fast-paced, truly collaborative environment. We are here to win as a team and as a company. We've brought together General Appraisers, Certified Public Accountants, Property Tax Consultants, Data Scientists, PhDs, best-in-class customer support representatives, and more to deliver top results for our customers.

Our core values are our guiding principles in everything we do:

- Customer Obsession
- Take Ownership
- Do The Right Thing
- Go Far Together
- Accelerate Innovation

Meet The Engineering Team

Ownwell's engineering team is the backbone of our technology, building and scaling the systems that power our mission. We're a tight-knit group of builders: engineers, scientists, and problem solvers who collaborate closely with product, marketing, operations, and sales teams to deliver tools that make a real impact for property owners. We value curiosity, ownership, and a willingness to tackle tough challenges head-on. If you enjoy working in a fast-paced environment where your ideas shape the future of the company, you'll fit right in.

As a Data Scientist at Ownwell, you'll play a pivotal role in leveraging data to refine and enhance our proprietary technology. You will build predictive models, conduct analyses, and develop machine learning algorithms to uncover insights that drive strategic business decisions. Your work will directly support product innovation, marketing effectiveness, operational efficiency, and customer satisfaction.

Responsibilities:

- Develop and deploy predictive models and machine learning solutions to optimize business processes and enable new products and use cases
- Analyze large, complex datasets to generate actionable insights, inform strategy, and improve operational efficiency
- Collaborate closely with product teams, engineers, and business stakeholders to integrate data science solutions into customer-facing and internal systems
- Design and execute experiments to validate and enhance product features and business initiatives
- Continuously improve data quality, model performance, and data governance practices
- Maintain and optimize data pipelines and analytical processes, ensuring scalability and reliability
- Stay abreast of emerging trends in data science and machine learning methodologies to inform and elevate Ownwell's data capabilities

Requirements:

- 2+ years of professional experience in data science or analytics roles
- Proficiency in Python for data analysis and machine learning (we do not use R and are looking exclusively for people who are proficient in Python)
- Familiarity with SQL, including querying databases, data manipulation, and relational database concepts
- Experience developing and deploying machine learning models in production environments
- Understanding of statistical methods and experiment design
- Familiarity with cloud services, such as AWS, for data processing and model deployment
- Excellent communication skills, with the ability to clearly present analytical findings to non-technical stakeholders

Ownwell offerings

- Entrepreneurial culture: own your career; we are here to support you on the journey.
- Access to the First Round Network to build your community outside of Ownwell.
- Flexible PTO: we believe in giving you the flexibility to own your time off. In addition to flexible time off, you get 11 company holidays, and we offer the last week of the year to recharge and reset.
- Competitive health benefits: we care for you and your family's health, as reflected in our benefits coverage.
- Learning support through a $1,000 stipend per year to enable investing in your individual learning needs.
- Support for the parental journey: up to 16 weeks of fully paid parental and bonding leave to support your journey as a new parent.
- As applicable, complimentary real estate and tax consulting licensing and renewal.

Ownwell's vision is to democratize access to real estate expertise. When we say we want to provide access, we mean providing access to everyone. To do that well, we need a team that's broadly representative. We welcome people from all backgrounds, ethnicities, cultures, and experiences. Ownwell is an equal opportunity employer. We do not discriminate on the basis of race, color, ancestry, religion, national origin, sexual orientation, age, citizenship, marital or family status, disability, gender identity or expression, veteran status, or any other status.

Posted 30+ days ago

Inflection AI – Palo Alto, CA
Inflection AI is a public benefit corporation leveraging our world-class large language model to build the first AI platform focused on the needs of the enterprise.

Who we are:

Inflection AI was re-founded in March of 2024, and our leadership has assembled a team of kind, innovative, and collaborative individuals focused on building enterprise AI solutions. We are an organization passionate about what we are building, enjoy working together, and strive to hire people with diverse backgrounds and experience.

Our first product, Pi, provides an empathetic and conversational chatbot. Pi is a public instance built from our 350B+ frontier model with our sophisticated fine-tuning (10M+ examples), inference, and orchestration platform. We are now focusing on building new systems that directly support the needs of enterprise customers using this same approach. Want to work with us? Have questions? Learn more below.

About the Role

As a Data Platform Engineer, you'll design the systems and tools that transform raw data into the lifeblood of our models: clean, richly labeled, and continuously refreshing datasets. Your work will span scalable ingestion pipelines, active-learning loops, human-and-AI annotation workflows, and quality-control analytics. The platform you build will power every stage of the model lifecycle, from supervised fine-tuning to retrieval-augmented generation and reinforcement learning.

This is a good role for you if you:

- Have hands-on experience building data or annotation platforms that support large-scale ML workloads
- Are fluent in Python, SQL, and modern data stacks (Spark/Flink, DuckDB/Polars, Arrow, Kafka/Airflow/Flyte)
- Understand how class balance, bias, leakage, and adversarial filtering impact ML data quality and model performance
- Have managed human-in-the-loop labeling operations, including vendor selection, rubric design, and LLM-assisted automation
- Care deeply about reproducibility and observability, tracking everything from dataset hashes to annotation agreement scores and drift detection
- Communicate clearly with both research scientists and non-technical stakeholders

Responsibilities include:

- Ingest and transform large multimodal corpora (text, code, audio, vision) using scalable ETL, normalization, and deduplication pipelines
- Build annotation tools (web UIs, task queues, consensus engines, and review dashboards) to enable fast and accurate labeling by both crowd vendors and internal experts
- Design active-learning and RLHF data loops that surface high-value samples for human review, integrate synthetic LLM feedback, and support continuous iteration
- Version, audit, and govern datasets with lineage tracking, privacy controls, and automated quality metrics (toxicity, PII, brand consistency)
- Collaborate with training, inference, and safety teams to define data specs, evaluate dataset health, and unlock new model capabilities
- Contribute upstream to open-source data and annotation tools (e.g., Flyte, Airbyte, Label Studio) and share best practices with the community

Employee Pay Disclosures

At Inflection AI, we aim to attract and retain the best employees and to compensate them in a way that appropriately and fairly values their individual contributions to the company. For this role, Inflection AI estimates that the starting annual base salary will fall in the range of approximately $175,000 - $350,000, depending on experience. This estimate can vary based on the factors described above, so the actual starting annual base salary may be above or below this range.
Interview Process

Apply on LinkedIn or our website for a specific role. After speaking with one of our recruiters, you'll enter our structured interview process, which includes the following stages:

1. Hiring Manager Conversation – an initial discussion with the hiring manager to assess fit and alignment.
2. Technical Interview – a deep dive with an Inflection engineer to evaluate your technical expertise.
3. Onsite Interview – a comprehensive assessment, including a domain-specific interview, a system design interview, and a final conversation with the hiring manager.

Depending on the role, we may also ask you to complete a take-home exercise or deliver a presentation. For non-technical roles, be prepared for a role-specific interview, such as a portfolio review.

Decision Timeline: We aim to provide feedback within one week of your final interview.
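Purely as an illustration of the deduplication pipelines and dataset hashes this posting mentions, exact-duplicate filtering via content hashing is one simple building block; the record shape and normalization below are assumptions for the sketch, not Inflection's actual pipeline (production systems typically add fuzzier near-duplicate techniques):

```python
# Illustrative only: filter exact duplicates from a text corpus by hashing
# a normalized form of each record's content.
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivial variants hash identically
    return " ".join(text.lower().split())

def dedupe(records):
    seen = set()
    for rec in records:
        digest = hashlib.sha256(normalize(rec["text"]).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            # Attach the hash so downstream steps can track dataset lineage
            yield {**rec, "content_hash": digest}

corpus = [
    {"id": 1, "text": "The quick brown fox."},
    {"id": 2, "text": "the  quick brown FOX."},   # trivial variant of record 1
    {"id": 3, "text": "An entirely different record."},
]
print([r["id"] for r in dedupe(corpus)])  # -> [1, 3]
```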

Posted 30+ days ago

SpaceX – Bastrop, TX
SpaceX was founded under the belief that a future where humanity is out exploring the stars is fundamentally more exciting than one where we are not. Today SpaceX is actively developing the technologies to make this possible, with the ultimate goal of enabling human life on Mars.

DATA ENGINEER, REGULATORY (STARLINK)

SpaceX is leveraging its experience building rockets and spacecraft to deploy Starlink, the world's most advanced broadband internet system. Starlink is the world's largest satellite constellation and provides fast, reliable internet to 5M+ users worldwide. We design, build, test, and operate all parts of the system: thousands of satellites, consumer receivers that allow users to connect within minutes of unboxing, and the software that brings it all together. We've only begun to scratch the surface of Starlink's potential global impact. As we continue to upgrade and expand the constellation, we're looking for best-in-class data engineers to join the team. This position will be responsible for designing, building, and maintaining mission-critical services, tools, and processes to simplify and accelerate data accessibility across regulatory/licensing data and other critical Starlink data.

RESPONSIBILITIES:

- Build tools and workflows that improve internal efficiency and collaboration, prevent internal data silos and process breakdowns, and scale with global growth
- Automate time-intensive, repetitive tasks to significantly reduce manual effort
- Fuse data from multiple unstructured sources to build organized, usable data repositories
- Create shared tools that enable self-service access, allowing teams to explore and analyze data independently
- Build and maintain high-quality automation to assist with Starlink's policy objectives and continued growth
- Work closely with Starlink Regulatory and partner teams to assess future needs and position the company accordingly

BASIC QUALIFICATIONS:

- Bachelor's degree in computer science, data science, physics, mathematics, or a STEM discipline
- 1+ years of experience in analytics, data science, data engineering, or software engineering
- 1+ years of experience in Python and SQL

PREFERRED SKILLS AND EXPERIENCE:

- Understanding of database structures, schema design, query optimization, and ETL development
- Experience with Robotic Process Automation (RPA)
- Expertise in designing software systems
- Front-end experience in React, Angular, or similar JavaScript frameworks
- Experience working with in-stream data processing of structured, semi-structured, and unstructured data
- Experience creating and managing dashboards using data visualization tools (e.g., Tableau, Power BI)
- Exceptional ability to communicate technical concepts to non-technical audiences at all organizational levels
- Demonstrated ability to take on projects requiring initiative, development of new expertise, and full ownership from inception to completion

ADDITIONAL REQUIREMENTS:

- This position is based in Bastrop, TX, and requires being onsite; remote work is not considered
- Willingness to travel up to 20% of the time
- Must be willing to work extended hours and/or weekends as needed
- This role may be subject to pre-employment drug testing and random drug and alcohol testing

ITAR REQUIREMENTS:

To conform to U.S. Government export regulations, the applicant must be (i) a U.S. citizen or national, (ii) a U.S. lawful permanent resident (green card holder), (iii) a refugee under 8 U.S.C. § 1157, or (iv) an asylee under 8 U.S.C. § 1158, or be eligible to obtain the required authorizations from the U.S. Department of State. Learn more about the ITAR here.

SpaceX is an Equal Opportunity Employer; employment with SpaceX is governed on the basis of merit, competence, and qualifications and will not be influenced in any manner by race, color, religion, gender, national origin/ethnicity, veteran status, disability status, age, sexual orientation, gender identity, marital status, mental or physical disability, or any other legally protected status. Applicants wishing to view a copy of SpaceX's Affirmative Action Plan for veterans and individuals with disabilities, or applicants requiring reasonable accommodation in the application/interview process, should reach out to EEOCompliance@spacex.com.
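As a purely illustrative sketch of the "fuse data from multiple unstructured sources" responsibility above, here is a minimal Python example that normalizes two differently shaped sources into one table with pandas; all field names are hypothetical, not Starlink's actual regulatory data model:

```python
# Illustrative only: combine CSV-like and JSON records with different
# schemas into one organized, queryable frame.
import io, json
import pandas as pd

csv_source = io.StringIO("country,license_id,expires\nUS,FCC-123,2026-01-31\n")
json_source = '[{"nation": "DE", "licenseId": "BNetzA-9", "expiry": "2025-11-30"}]'

a = pd.read_csv(csv_source)
b = pd.DataFrame(json.loads(json_source)).rename(
    columns={"nation": "country", "licenseId": "license_id", "expiry": "expires"}
)

# Align schemas, normalize types, and combine into one repository-ready frame
fused = pd.concat([a, b], ignore_index=True)
fused["expires"] = pd.to_datetime(fused["expires"])
print(fused.sort_values("expires"))
```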

Posted 30+ days ago

Audax Group – Boston, MA
Audax Group is a leading alternative investment manager with offices in Boston, New York, San Francisco, and London. Since its founding in 1999, the firm has raised over $40 billion in capital across its Private Equity and Private Debt businesses. With more than 400 employees and approximately 180 investment professionals, the firm is a leading capital partner for North American middle market companies. For more information, visit the Audax Group website www.audaxgroup.com . POSITION SUMMARY: Audax is looking for a dynamic, motivated Data Engineer Co-Op to join the business solutions team. This position will be responsible for supporting the company’s Senior Data Engineer with hands-on experience in data engineering, data governance, and data management. You will collaborate with the data team design, develop, and maintain data pipelines, ensure data quality and integrity, and contribute to the expansion of the company's data warehouse. RESPONSIBILITIES: Designing, implementing, maintaining and scaling the Audax data ecosystem Designing, implementing, maintaining and scaling Audax’s data ecosystem. Work with professionals to scale their data infrastructure within the company’s cloud enterprise data warehouse. Play an integral role in designing ETL processes and analytic pipeline for investment professionals. Assist with implementation of data governance policies, including data cataloging and data lineage. Provide data validation and data quality assessments. Identifying and resolving data anomalies. Technology Competencies Proficient in SQL and Python required. Familiarity in Java a plus. Familiarity in on-premise and cloud-based databases. Experience designing and implementing optimal data schemas. Knowledge of/Experience with implementing best practices in data governance REQUIREMENTS/QUALIFICATIONS: Currently pursuing a Bachelor’s or Master’s degree in Data Engineering, Analytics Engineering, Information Technology, Computer Science or another related field. Strong analytical and problem-solving skills. Previous Co-Op or full-time experience in related field is a plus. Experience taking large data sets, and cleaning/distilling these datasets. Experience with python a plus Experience with Alteryx and it’s advanced capabilities a plus. Strong written and oral communication skills required. Ability to interact in an efficient and meaningful way with all levels of the organization required. LOCATION: Boston, MA. Hybrid, in the office 3 days/week. These in-office requirements may be adjusted based on the needs of the business. Dates: January 2026 - June 2026 This job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee. Duties, responsibilities and activities may change or new ones may be assigned at any time with or without notice. Audax Management Co. is an equal opportunity employer. Please note that Audax Group and its affiliated entities do not accept unsolicited resumes from a third-party recruiting agency not currently under a signed agreement. Any unsolicited resume that is sent to directly to Audax Group or one of its affiliated entities, or its employees, including those submitted to hiring managers by a third-party recruiting agency not currently under a signed agreement, will be considered property of Audax Group. 
If a third-party recruiting agency submits a resume without an agreement, Audax Group or its affiliated entities explicitly reserves the right to pursue and hire those candidate(s) without any financial obligation to the third-party recruiting agency. Any third-party recruiting agency should contact either a member of the Talent Acquisition or Human Resource team at Audax Group, in conjunction with a valid, fully executed contract for service based upon a specific job opening.
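For readers unfamiliar with the ETL work this co-op role centers on, here is a minimal, hypothetical Python sketch of an extract-transform-load step with a simple data-quality check. It uses sqlite3 only so the example is self-contained and runnable; the table and column names (raw_deals, deals, amount_usd) are invented for illustration, not drawn from Audax's actual systems.

    # Minimal ETL sketch (hypothetical): extract raw rows, normalize them,
    # load them into a curated table, then validate the result.
    import sqlite3

    def run_etl(conn: sqlite3.Connection) -> int:
        # Extract: pull raw records (table/column names are invented).
        rows = conn.execute("SELECT id, company, amount_usd FROM raw_deals").fetchall()
        # Transform: trim whitespace, drop rows with missing or negative amounts.
        cleaned = [
            (rid, company.strip(), float(amount))
            for rid, company, amount in rows
            if company and amount is not None and float(amount) >= 0
        ]
        # Load: idempotent upsert into the curated table.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS deals "
            "(id INTEGER PRIMARY KEY, company TEXT, amount_usd REAL)"
        )
        conn.executemany("INSERT OR REPLACE INTO deals VALUES (?, ?, ?)", cleaned)
        conn.commit()
        # Validate: a basic row-count data-quality assertion.
        loaded = conn.execute("SELECT COUNT(*) FROM deals").fetchone()[0]
        assert loaded >= len(cleaned), "fewer rows loaded than transformed"
        return loaded

In a production setting the same validate-after-load pattern would run against the firm's cloud warehouse rather than a local database.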

Posted 2 weeks ago

C logo
Credera Experienced Hiring Job BoardDallas, TX
We are looking for an enthusiastic GenAI and LLM Architect to add to Credera’s Data capability group. Our ideal candidate is excited about leading project-based teams in a client-facing role to analyze large data sets and derive insights through machine learning (ML) and artificial intelligence (AI) techniques. They have strong experience in data preparation and analysis using a variety of tools and programming techniques, building and implementing models, and creating and running simulations. The architect should be familiar with the deployment of enterprise-scale models into a production environment; this includes leveraging full development lifecycle best practices for both cloud and on-prem solutions across a variety of use cases. You will act as the primary architect and technical lead on projects to scope and estimate work streams, architect and model technical solutions to meet business requirements, and serve as a technical expert in client communications. On a typical day, you might expect to participate in design sessions, provision environments, and coach and lead junior resources on projects. WHO YOU ARE: Proven experience in the architecture, design, and implementation of large-scale and enterprise-grade AI/ML solutions 5+ years of hands-on statistical modeling and/or analytical experience in an industry or consulting setting Master’s degree in statistics, mathematics, computer science or related field (a PhD is preferred) Experience with a variety of ML and AI techniques (e.g. multivariate/logistic regression models, cluster analysis, predictive modeling, neural networks, deep learning, pricing models, decision trees, ensemble methods, etc.) Proficiency in Python and in frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers for model development and experimentation Strong understanding of NLP fundamentals, including tokenization, word embeddings, language modeling, sequence labeling, and text generation Experience with data processing using LangChain, data embedding using LLMs, vector databases, and prompt engineering Advanced knowledge of relational and non-relational databases (SQL, NoSQL) Proficient in large-scale distributed systems (Hadoop, Spark, etc.) Experience with designing and presenting compelling insights using visualization tools (RShiny, R, Python, Tableau, Power BI, D3.js, etc.)
Passion for leading teams and providing both formal and informal mentorship Experience with wrangling, exploring, transforming, and analyzing datasets of varying size and complexity Knowledge of tools and processes to monitor model performance and data quality, including model tuning experience Strong communication and interpersonal skills, and the ability to engage customers at a business level in addition to a technical level Stay current with AI/ML trends and research; be a thought leader in the AI space Experience with implementing machine learning models in production environments through one or more cloud platforms: Google Cloud Platform Azure cloud services AWS cloud services Basic Qualifications Thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success Contribute in a team-oriented environment Prioritize multiple tasks in order to consistently meet deadlines Creatively solve problems in an analytical environment Adapt to new environments, people, technologies and processes Excel in leadership, communication, and interpersonal skills Establish strong work relationships with clients and team members Generate ideas and understand different points of view Learn More Credera is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, and AI and technology expertise to deliver valuable customer experiences and accelerated growth across a broad range of industries worldwide. Our one-of-a-kind global boutique approach means we provide our clients with tailored solutions unique to their organization that can scale due to our extensive footprint. As a values-led organization, our mission is to make an extraordinary impact on our clients, our people, and our community. We believe it is this approach that has allowed us to work with and transform the most influential brands and organizations in the world, from strategy through to execution. More information is available at www.credera.com. We are part of the OPMG Group of Companies, a division of Omnicom Group Inc. Hybrid Work Model: Our employees have the flexibility to work remotely two days per week. We expect our team members to spend 3 days per week in person with the flexibility to choose the days and times that work best for both them and their project or internal teams. This could be at a Credera office or at the client site. You'll work closely with your project team to balance the flexibility we want to provide with the connection of being together to produce amazing results for our clients. The why: We are passionate about growing our people both personally and professionally. Our philosophy is that in-person engagement is critical for our ability to develop deep relationships with our clients and our team members – it's how we earn trust, learn from others, and ultimately become better consultants and professionals. Travel: Our goal is to keep out-of-market travel to a minimum, and most projects do not require significant travel. While certain projects can require frequent travel (up to 80% for a period of time), our average travel percentage over a year for team members is typically between 10-30%. We try to take a personal approach to travel. You will submit your travel preferences, which our staffing teams will take into account when aligning you to a role.
Credera will never ask for money up front and will not use apps such as Facebook Messenger, WhatsApp or Google Hangouts for communicating with you. You should be very wary of, and carefully scrutinize, any job opportunity that asks for money prior to starting and/or one where all communications take place exclusively via chat.
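As a concrete anchor for the embedding and vector-database skills this posting lists, here is a toy Python sketch of the retrieval pattern those tools implement: score documents by cosine similarity against a query vector and return the top matches. The embed() function is a deliberate stand-in (a crude character hash, not a real model), since the posting names no specific embedding model.

    # Toy sketch of embedding-based retrieval (the pattern behind vector
    # databases). embed() is a placeholder, NOT a real embedding model.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        vec = np.zeros(64)
        for i, ch in enumerate(text.lower()):
            vec[(i + ord(ch)) % 64] += 1.0          # crude character hash
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    def top_k(query: str, docs: list[str], k: int = 3) -> list[tuple[float, str]]:
        q = embed(query)
        # Vectors are unit-normalized, so a dot product is cosine similarity.
        scored = [(float(np.dot(q, embed(d))), d) for d in docs]
        return sorted(scored, reverse=True)[:k]

A real system would swap in model-generated embeddings and an approximate-nearest-neighbor index; the ranking logic stays the same.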

Posted 30+ days ago

C logo
Credera Experienced Hiring Job BoardHouston, TX
Credera is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, and AI and technology expertise to deliver valuable customer experiences and accelerated growth across various industries. We continuously evolve our services to meet the needs of future organizations and reflect modern best practices. Our unique global approach provides tailored solutions, transforming the most influential brands and organizations worldwide. Our employees, the lifeblood of our company, are passionate about making an extraordinary impact on our clients, colleagues, and communities. This passion drives how we spend our time, resources, and talents. Our commitment to our people and work has been recognized globally. Please visit our employer awards page: https://www.credera.com/awards-and-recognition . As an Architect in Credera’s Data capability group, you will lead teams in implementing modern data architecture, data engineering pipelines, and advanced analytical solutions. Our projects span designing and implementing the latest data platform approaches (e.g., Lakehouse, DataOps, Data Mesh) using best practices and cloud solutions, building scalable data and ML pipelines, democratizing data through modern governance approaches, and delivering data products using advanced machine learning, visualization, and integration approaches. You will act as the primary architect and technical lead on projects to scope and estimate work streams, architect and model technical solutions to meet business requirements, serve as a technical expert in client communications, and mentor junior project team members. On a typical day, you might expect to participate in design sessions, build data structures for an enterprise data lake or statistical models for a machine learning algorithm, coach junior resources, and manage technical backlogs and release management tools. Additionally, you will seek out new business development opportunities at existing and new clients. WHO YOU ARE: You have a minimum of 5 years of technical, hands-on experience building, optimizing, and implementing data pipelines and architecture Experience leading teams to wrangle, explore, and analyze data to answer specific business questions and identify opportunities for improvement You are a highly driven professional and enjoy serving in a fast-paced, dynamic client-facing role where delivering solutions to exceed high expectations is a measure of success You have a passion for leading teams and providing both formal and informal mentorship You have strong communication and interpersonal skills, and the ability to engage customers at a business level in addition to a technical level You have a deep understanding of data governance and data privacy best practices You incorporate the usage of AI tooling, efficiencies, and code assistance tooling in your everyday workflows You have a degree in Computer Science, Computer Engineering, Engineering, Mathematics, Management Information Systems or a related field of study The ideal candidate will have recent technical knowledge of the following: Programming languages (e.g. Python, Java, C++, Scala, etc.) SQL and NoSQL databases (MySQL, DynamoDB, CosmosDB, Cassandra, MongoDB, etc.) Data pipeline and workflow management tools (Airflow, Dagster, AWS Step Functions, Azure Data Factory, etc.) Stream-processing systems (e.g. Storm, Spark Streaming, Pulsar, Flink, etc.)
Data Warehouse design (Databricks, Snowflake, Delta Lake, Lake Formation, Iceberg) MLOps platforms (SageMaker, Azure ML, Vertex AI, MLflow) Container Orchestration (e.g. Kubernetes, Docker Swarm, etc.) Metadata management tools (Collibra, Atlas, DataHub, etc.) Experience with the data platform components on one or more of the following cloud service providers: AWS Google Cloud Platform Azure Basic Qualifications Thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success Contribute in a team-oriented environment Prioritize multiple tasks in order to consistently meet deadlines Creatively solve problems in an analytical environment Adapt to new environments, people, technologies and processes Excel in leadership, communication, and interpersonal skills Establish strong work relationships with clients and team members Generate ideas and understand different points of view Learn More: Credera is part of the Omnicom Precision Marketing Group (OPMG), a division of Omnicom Group Inc. OPMG is a global network of agencies that leverage data, technology, and CRM to create personalized and impactful customer experiences. OPMG offers a range of services, such as data-driven product / service design, technology strategy and implementation, CRM / loyalty strategy and activation, econometric and attribution modelling, technical and business consulting, and digital experience design and development. Benefits: Credera provides a competitive salary and comprehensive benefits plan. Benefits include health, mental health, vision, dental, and life insurance, prescriptions, fertility and adoption benefits, community service days, paid parental leave, PTO, 14 paid holidays, matching 401(k), Healthcare & Dependent Flexible Spending Accounts, and disability benefits. For more information regarding Omnicom benefits, please visit www.omnicombenefits.com. Hybrid Working Model: Our employees have the flexibility to work remotely two days a week. We expect team members to spend three days in person, with the freedom to choose the days and times that best suit them, their project, and their teams. You'll collaborate with your project team to balance flexibility with the benefits of in-person connection, delivering outstanding results for our clients. The Why: In-person engagement is essential for building strong relationships with clients and colleagues. It fosters trust, encourages learning, and helps us grow as consultants and professionals. Travel: For our consulting roles, our goal is to minimize travel, and most projects do not require extensive travel. While some projects may involve up to 80% travel for a period, the annual average for team members is typically 10%–30%. We take a personal approach to travel by considering your submitted preferences when assigning roles. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender identity, sexual orientation, national origin, age, genetic information, veteran status, or disability. Credera will never ask for money up front and will not use apps such as Facebook Messenger, WhatsApp or Google Hangouts for communicating with you. You should be very wary of, and carefully scrutinize, any job opportunity that asks for money prior to starting and/or one where all communications take place exclusively via chat.
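Since Airflow is one of the workflow tools this posting names, a minimal, hypothetical DAG sketch may help illustrate what "data pipeline and workflow management" looks like in practice. This assumes recent Airflow 2.x syntax; the dag_id, task names, and empty callables are all invented for illustration.

    # Hypothetical Airflow 2.x DAG: three invented tasks wired into a
    # daily extract -> transform -> load sequence.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        pass  # placeholder: pull data from a source system

    def transform():
        pass  # placeholder: clean and reshape the extracted data

    def load():
        pass  # placeholder: write results to the warehouse

    with DAG(
        dag_id="example_etl",            # invented name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load   # dependency order

The >> operator declares task ordering; Dagster, Step Functions, and Data Factory express the same dependency-graph idea with different syntax.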

Posted 30+ days ago

Dark Wolf Solutions logo
Dark Wolf SolutionsChantilly, VA
Dark Wolf Solutions is seeking a highly motivated and experienced Data Engineer to assist with strategic planning and oversee implementation of the cloud-based data environment. The work includes engaging regularly with data scientists, analysts, and managers. This role is located in Chantilly, VA. Responsibilities: Providing comprehensive support to analysts by delivering large datasets, methodologies, and impactful data visualizations to address critical intelligence needs. Aiding in the development and maintenance of a robust cloud-based data environment for efficient data transport, storage, ETL processes, and solution dissemination. Assisting in data engineering, cloud architecture design, and application development efforts, contributing to the overall success of projects. Engaging regularly with data scientists, analysts, and managers. Assisting with strategic planning and overseeing implementation of the cloud-based data environment, to include mapping of data sources and access controls. Developing code, data models, and documentation to standards. Providing systems administration and programming support for ETL processes and data infrastructure efforts. Training and conducting knowledge transfer to team members on issues and technologies related to the ETL process, the on-premises high-capacity compute cluster, and administrative duties. Coordinating with external data and platform providers to ensure the smooth functioning of the systems and data flows, and to accomplish any needed changes. Coordinating with experts to assist with technical aspects required to acquire new datasets or data management technologies for inclusion in the environment. Supporting the cross-domain transfer and integration of data, to include using the on-premises cluster, cloud environment, and SQL-based systems such as PostgreSQL and Impala. Required Qualifications: Effectively facilitates communication and collaboration as a technical liaison between system engineers, data engineers, data scientists, analysts, and non-technical managers/personnel. Proficient in utilizing AWS cloud services, including long-term storage solutions and cloud-based database services like Databricks and Elastic MapReduce (EMR). Experienced in designing and implementing SQL database structures and mappings between databases. Knowledgeable in network-attached storage (NAS) systems and their implementation. Skilled in creating and maintaining automated deployment scripts for streamlined software releases. Expert in managing and executing large-scale data migration projects, ensuring data integrity and minimal disruption.
Hands-on experience with large-scale data compute and processing clusters such as Hadoop, optimizing performance and scalability. Adept at test-driven development within a secure on-premises cluster, selecting and employing the most efficient languages for the task, including Apache NiFi, Java, Python, and SQL. In-depth understanding of database architecture and performance design methodologies, providing system-tuning recommendations for technologies like Hadoop Hive, Apache NiFi, and Impala. Continuously improves and maintains the ETL process through the implementation and standardization of data flows using Apache NiFi and other ETL tools. US Citizen with an active Top Secret/Sensitive Compartmented Information (TS/SCI) security clearance with polygraph. Desired Qualifications: Possesses in-depth knowledge of the data environment and on-premises compute infrastructure. Proficient in storage and backup recovery systems, ensuring data availability and integrity. Experienced with Data Quality and Data Governance concepts, implementing best practices for data management. Skilled in System Administration and Linux Administration, maintaining system stability and performance. Adept at transforming data using High Capacity Compute (HCC) techniques to derive valuable insights. Experienced in administering system rights and responsibilities for the on-premises cluster and cloud infrastructure, ensuring secure access and resource allocation. This position is located in Chantilly, VA. The estimated salary range for this position is $170,000.00 - $210,000.00, commensurate with clearance, technical skill set, and overall experience. We are proud to be an EEO/AA employer (Minorities/Women/Veterans/Disabled and other protected categories). In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire.
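To ground the ETL-standardization and PostgreSQL work described above, here is a small, hypothetical data-quality spot check in Python using psycopg2. The DSN, table, and key-column names are invented, and a hardened version would validate identifiers rather than interpolate them into the query text.

    # Hypothetical data-quality spot check against PostgreSQL: count NULL
    # and duplicate key values in a table. Identifiers are interpolated for
    # brevity only; validate or whitelist them in real code.
    import psycopg2

    def null_and_dupe_counts(dsn: str, table: str, key: str) -> tuple[int, int]:
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {table} WHERE {key} IS NULL")
            nulls = cur.fetchone()[0]
            # COUNT(col) ignores NULLs, so this difference is the number of
            # surplus duplicate key values.
            cur.execute(f"SELECT COUNT({key}) - COUNT(DISTINCT {key}) FROM {table}")
            dupes = cur.fetchone()[0]
        return nulls, dupes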

Posted 30+ days ago

Ridgeline logo
RidgelineNew York, NY
Principal Software Engineer, Data Persistence Location: New York, NY Are you a systems-minded engineer obsessed with consistency, availability, and durability at scale? Do you enjoy working deep in the stack to build resilient, multi-tenant infrastructure that supports real-time mission-critical workloads? Are you excited to contribute to distributed systems that are performant, reliable, and secure, especially when it matters most? If so, we invite you to be a part of our innovative team. As a Principal Software Engineer on Ridgeline’s Data Persistence team, you will design, build, and evolve the core systems that underpin our data platform, ensuring reliability, performance, and elasticity across a complex, multi-region architecture. You’ll lead major architectural decisions, optimize distributed storage systems, and ensure compliance with strict regulatory standards like SEC and SOC 2. This is a high-impact role on a small, specialized team where your work directly enables critical business operations for investment managers. You’ll leverage cutting-edge technologies, including AI-powered tools like GitHub Copilot and ChatGPT, to accelerate development and drive high-quality outcomes at scale. You must be work authorized in the United States without the need for employer sponsorship. What will you do? Design and evolve our distributed database architecture, including storage engines, query layers, and consistency models. Evaluate and optimize write/read paths, indexing strategies, replication mechanisms, and failover recovery techniques. Lead the strategic roadmap for how we scale our multi-tenant, microservice-based architecture while ensuring strong guarantees (consistency, availability, durability). Partner with product, SRE, and platform teams to shape the future of our persistence, observability, and data access patterns. Optimize database infrastructure for cost-efficiency, balancing performance and scalability to improve platform margins at scale. Mentor senior engineers and serve as a thought leader across the organization. Desired Skills and Experience 15+ years of experience in software engineering with a strong focus on database systems. Have authored or deeply contributed to high-performance distributed systems, databases, or storage engines. Possess deep fluency in CAP theorem tradeoffs, Raft/Paxos, LSM vs B-Tree internals, compaction strategies, and query execution plans. Have production experience scaling systems that handle TBs–PBs of data across multiple regions or data centers. Are comfortable navigating between practical tradeoffs and theoretical foundations in your technical decision-making. Can clearly articulate complex systems to diverse audiences and influence engineering direction across orgs. Strong hands-on experience with AWS cloud-native architectures, including services like Aurora Global Database, S3, Route 53, and Lambda. Strong experience with database observability tools like Datadog DBM or equivalent. Proficiency in at least one programming language (Kotlin, Java, Python, TypeScript). Nice to Have Contributions to open-source database technologies (e.g., PostgreSQL, ClickHouse, RocksDB). Experience with hybrid transactional/analytical processing (HTAP) systems or stream processing architectures. Familiarity with emerging trends like vector databases, CRDTs, or columnar storage engines. Author, speaker, or deep contributor to technical books, blogs, or conference talks. About Ridgeline Ridgeline is the industry cloud platform for investment management.
It was founded in 2017 by visionary entrepreneur Dave Duffield (co-founder of both PeopleSoft and Workday) to address the unique technology challenges of an industry in need of new thinking. We are building a modern platform in the public cloud, purpose-built for the investment management industry to empower businesses like never before. Headquartered in Lake Tahoe with offices in Reno, Manhattan, and the Bay Area, Ridgeline is proud to have built a fast-growing, people-first company that has been recognized by Fast Company as a “Best Workplace for Innovators,” by LinkedIn as a “Top U.S. Startup,” and by The Software Report as a “Top 100 Software Company.” Ridgeline is proud to be a community-minded, discrimination-free equal opportunity workplace. Ridgeline processes the information you submit in connection with your application in accordance with the Ridgeline Applicant Privacy Statement. Please review the Ridgeline Applicant Privacy Statement in full to understand our privacy practices and contact us with any questions. Compensation and Benefits The typical starting salary range for new hires in this role is targeted at $255,000 - $300,000. Final compensation amounts are determined by multiple factors, including candidate experience and expertise, and may vary from the amount listed above. As an employee at Ridgeline, you’ll have many opportunities for advancement in your career and can make a true impact on the product. In addition to the base salary, 100% of Ridgeline employees can participate in our Company Stock Plan subject to the applicable Stock Option Agreement. We also offer rich benefits that reflect the kind of organization we want to be: one in which our employees feel valued and are inspired to bring their best selves to work. These include unlimited vacation, educational and wellness reimbursements, and $0 cost employee insurance plans. Please check out our Careers page for a more comprehensive overview of our perks and benefits. #LI-Hybrid
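For readers who haven't met the LSM internals this posting name-checks, here is a toy Python sketch of the core idea: writes buffer in an in-memory memtable and are flushed to immutable sorted runs, while reads check the newest data first. This is purely illustrative and assumes nothing about Ridgeline's actual storage engine; real LSM engines add write-ahead logs, bloom filters, and compaction.

    # Toy LSM-tree sketch: writes land in a memtable; when it fills, it is
    # frozen as a sorted, immutable run (an "SSTable"). Reads consult the
    # memtable first, then runs from newest to oldest.
    class ToyLSM:
        def __init__(self, flush_threshold: int = 4):
            self.memtable: dict[str, str] = {}
            self.runs: list[dict[str, str]] = []   # newest run last
            self.flush_threshold = flush_threshold

        def put(self, key: str, value: str) -> None:
            self.memtable[key] = value
            if len(self.memtable) >= self.flush_threshold:
                self.runs.append(dict(sorted(self.memtable.items())))
                self.memtable = {}                 # start a fresh memtable

        def get(self, key: str) -> str | None:
            if key in self.memtable:               # newest data wins
                return self.memtable[key]
            for run in reversed(self.runs):        # then newest flushed run
                if key in run:
                    return run[key]
            return None

Compaction, which the posting also mentions, would periodically merge overlapping runs to bound read amplification; that tradeoff against B-Tree-style in-place updates is exactly the "LSM vs B-Tree internals" fluency the role asks for.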

Posted 30+ days ago

Ridgeline logo
RidgelineReno, NV
Staff Software Engineer, Custodian Data Location: Reno, NV, San Ramon, CA Do you have a passion for finance & investing? Are you interested in modeling the industry’s data and making it highly available? Are you a technical leader who enjoys refining both technology performance and team collaboration? If so, we invite you to join our innovative team. As a Ridgeline Staff Software Engineer on our Custodian Data team, you’ll have the unique opportunity to build an industry-defining, fast, scalable custodian engine with full asset class support and global market coverage. You will be relied on for your technical leadership to help the team evolve our architecture, scale to meet our growth opportunity, and exemplify software engineering best practices. Our team of engineers is building with cutting-edge technologies, including AI tools like GitHub Copilot and ChatGPT, in a fast-moving, creative, progressive work environment. You’ll be encouraged to think outside the box, bringing your own vision, passion, and insights to drive advancements that impact both our team and the industry. Our team is committed to creating a lasting impact on the investment management industry, leveraging AI and leading development practices to bring transformative change. You must be work authorized in the United States without the need for employer sponsorship. What will you do? Contribute domain knowledge, design skills, and technical expertise to a team where design, product, and engineering collaborate closely Be involved in the entire software development process, from requirements and design reviews to shipping code and observing how it lands with our customers. Impact a developing tech stack based on AWS back-end services Participate in the creation and construction of developer-based automation that leads to scalable, high-quality applications customers will depend on to run their businesses Coach, mentor, and inspire teams of product engineers who are responsible for delivering high-performing, secure enterprise applications Think creatively, own problems, seek solutions, and communicate clearly along the way Contribute to a collaborative environment deeply rooted in learning, teaching, and transparency Desired Skills and Experience 8+ years in a software engineering position with a history of architecting and designing new products and technologies Experience building cloud-native applications on AWS/Azure/Google Cloud A degree in Computer Science, Information Science, or a related discipline Extensive experience in Java or Kotlin Experience with API and event design Background in high-availability systems Experience with L2 and L3 support and participation in an on-call rotation.
Experience with production instrumentation, observability, and performance monitoring Willingness to learn about new technologies while simultaneously developing expertise in a business domain/problem space Understand the value of automated tests at all levels Ability to focus on short-term deliverables while maintaining a big-picture long-term perspective Serious interest in having fun at work Bonus: 3+ years of experience engineering in Data Pipeline, Reconciliation, Market Data, or other Fintech applications Understanding of AWS services and infrastructure Experience with Docker or containerization Experience with agile development methodologies Experience with React Experience with caching Experience with data modeling Experience leading difficult technical projects that take multiple people and teams to complete Ability to handle multiple projects and prioritize effectively Excellent communication skills, both written and verbal Willingness to learn about cutting-edge technologies while cultivating expertise in a business domain/problem space An aptitude for problem-solving Ability to amplify the ideas of others Responsibility for delivering an excellent project that extends beyond coding Ability to adapt to a fast-paced and changing environment Compensation and Benefits The typical starting salary range for new hires in this role is $174,000 - $220,000. Final compensation amounts are determined by multiple factors, including candidate experience and expertise, and may vary from the amount listed above. As an employee at Ridgeline, you’ll have many opportunities for advancement in your career and can make a true impact on the product. In addition to the base salary, 100% of Ridgeline employees can participate in our Company Stock Plan subject to the applicable Stock Option Agreement. We also offer rich benefits that reflect the kind of organization we want to be: one in which our employees feel valued and are inspired to bring their best selves to work. These include unlimited vacation, educational and wellness reimbursements, and $0 cost employee insurance plans. Please check out our Careers page for a more comprehensive overview of our perks and benefits. About Ridgeline Ridgeline is the industry cloud platform for investment management. It was founded in 2017 by visionary entrepreneur Dave Duffield (co-founder of both PeopleSoft and Workday) to address the unique technology challenges of an industry in need of new thinking. We are building a modern platform in the public cloud, purpose-built for the investment management industry to empower businesses like never before. Headquartered in Lake Tahoe with offices in Reno, Manhattan, and the Bay Area, Ridgeline is proud to have built a fast-growing, people-first company that has been recognized by Fast Company as a “Best Workplace for Innovators,” by LinkedIn as a “Top U.S. Startup,” and by The Software Report as a “Top 100 Software Company.” Ridgeline is proud to be a community-minded, discrimination-free equal opportunity workplace. Ridgeline processes the information you submit in connection with your application in accordance with the Ridgeline Applicant Privacy Statement. Please review the Ridgeline Applicant Privacy Statement in full to understand our privacy practices and contact us with any questions.

Posted 30+ days ago

D logo
Dynamis, Inc.Huntsville, AL
The Data Architect for the DeCPTR-Nuclear project is responsible for designing and implementing a secure, centralized data architecture essential for nuclear radiation survivability testing. This role involves creating a robust framework that ensures efficient storage, retrieval, and management of test data, in compliance with ISO 9001 standards and MDA guidance. The Data Architect will collaborate with Data Scientists and Information Systems (IS) Business Analysts to ensure seamless data integration, accessibility, and analysis, supporting the project's strategic objectives and advancement. Responsibilities: Data Architecture Design: Develop and maintain a scalable data architecture framework that supports the project's data management needs. Infrastructure Implementation: Oversee the implementation of data storage solutions, ensuring they are secure, efficient, and compliant with industry standards. Collaboration with Data Scientists: Work closely with Data Scientists to ensure that the architecture supports advanced data analysis and modeling, providing the necessary infrastructure for data-driven insights. Collaboration with IS Business Analysts: Partner with IS Business Analysts to design information systems that facilitate data flow, integration, and accessibility across varied platforms and stakeholders. Data Integrity and Security: Implement best practices for data integrity, security, and compliance, conducting regular audits and updates as necessary. Optimization: Continuously assess and optimize the data architecture to enhance performance and support evolving project requirements. Requirements: U.S. Citizenship required Bachelor’s Degree required in Computer Science, Information Technology, Data Science, or a related field. A minimum of 5-8 years of experience in data management, database design, or IT infrastructure, preferably within the defense or aerospace sectors. Proficiency in database technologies (e.g., SQL, NoSQL), data modeling, architectures, cloud services, and big data technologies. Ability to design data models and architectures that support business needs, ensuring data integrity and accessibility. Certifications: Certified Data Management Professional (CDMP) AWS Certified Solutions Architect or similar cloud platform certifications Preferred: Technical Expertise: Strong understanding of database management systems, data warehousing, and ETL (Extract, Transform, Load) processes. Proficiency in cloud services and big data technologies. Analytical Skills: Ability to design data models and architectures that support business needs, ensuring data integrity and accessibility. Communication Skills: Excellent ability to communicate complex technical ideas to both technical and non-technical stakeholders. Problem-Solving Skills: Proficient in troubleshooting complex data issues and designing scalable, efficient data solutions. Project Management: Experience with project management methodologies and tools, including Agile or Lean practices. Compliance: Familiarity with ISO 9001 quality management standards and DoD regulatory requirements related to data management.

Posted 30+ days ago

Critical Mass logo
Critical MassSan Jose, CA
As an Associate Data Engineer, you will support the team in setting up, monitoring, and managing multi-channel marketing campaigns, and assist with data manipulation and analysis tasks. You will learn to design and maintain data solutions that help optimize processes and automate tasks, contributing to the success of our clients’ campaigns. You will be: Assisting in creating basic segments using SQL for multi-channel campaigns. Supporting the automation of manual processes under supervision using SQL and Python. Helping monitor campaigns to identify and report issues. Assisting in analyzing documentation and learning to extract data using APIs with team guidance. Collaborating with the Marketing Sciences and development teams to support campaign tracking and code deployment. Assisting in campaign implementation and monitoring on platforms such as Adobe Campaign Classic, Salesforce Marketing Cloud, Adobe Journey Optimizer, Pega, or Braze. Participating in meetings with agencies, vendors, and clients to understand requirements and support campaign execution. Communicating basic progress and results to internal teams. Supporting data tagging and organization to improve future analysis. Staying up to date with basic knowledge of digital marketing and data analysis, with a strong willingness to learn continuously. You have: Advanced English proficiency (at least B2+). 1+ years of experience using SQL, Python, or working with APIs for data extraction. 1+ years of experience designing ETL solutions. Knowledge of cloud computing technologies like Salesforce, Databricks, Azure, Snowflake, and AWS. Strong analytical mindset with exceptional reporting skills and attention to detail. Experience with HTML, CSS, SSJS, and personalization in campaign tools. Excellent communication skills with experience working in cross-functional teams across different organizations. Proven experience in campaign management on platforms like Salesforce Marketing Cloud, Adobe Experience Platform, Pega, or Braze is a PLUS. Strong project management and customer relationship skills. Demonstrated ability to quickly learn and apply knowledge across a wide range of subjects. What We Offer: Maternity and parental leave extra days Competitive benefits packages Vacation, compassionate leave, sick days, and flex days Access to online services for families and new parents Diversity and Inclusion Board with 12 affinity groups Internal learning and development programs Enterprise-wide employee discounts And more… At Critical Mass, we value our employees and offer competitive compensation and benefits packages. If you’re looking for a challenging and rewarding opportunity to make a significant impact on the lives of our employees, we encourage you to apply for this exciting position today! The Talent Team at Critical Mass is focused on ensuring we provide the best training, onboarding, and employee experience possible! Our new hires & employees are the future of our organization, and we want to set you up for long-term success. In an effort to do so, we expect our team to work from an office a minimum of 3 days a week. The ask stems from our desire to: Strengthen opportunity for continuous learning Improve collaboration and team relationships. Increase employee engagement This work model balances the need for individual flexibility while maintaining the relentless customer focus we provide at CM.
We understand that not everyone may feel comfortable with this expectation, so we ask that you please let us know immediately if there are any concerns so we can help navigate accordingly. Critical Mass is an equal opportunity employer. The Critical Mass Talent Acquisition team will only communicate from email addresses that use the URLs criticalmass.com, omc.com, and us.greenhouse-mail.io. We will not use apps such as Facebook Messenger, WhatsApp, or Google Hangouts for communicating with you. We will never ask you to send us money, technology, or anything else to work for our company. If you believe you are the victim of a scam, please review your local government consumer protections guidance and reach out to them directly. If U.S. based: https://www.consumer.ftc.gov/articles/job-scams#avoid If Canada based: https://www.canada.ca/en/services/finance/consumer-affairs.html If U.K. based: https://www.gov.uk/consumer-protection-rights If Costa Rica based: https://www.consumo.go.cr/educacion_consumidor/consejos_practicos.aspx
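Since the role opens with "creating basic segments using SQL," a tiny, hypothetical example may clarify what a campaign segment is: a query that selects an eligible audience. The schema here (customers, opted_in, last_purchase_date) is invented, and sqlite3 is used only to keep the Python sketch self-contained; a real campaign platform would run the same kind of query against its own data store.

    # Hypothetical campaign segment: opted-in customers with a purchase in
    # the last 90 days. Schema is invented for illustration.
    import sqlite3

    SEGMENT_SQL = """
        SELECT customer_id, email
        FROM customers
        WHERE opted_in = 1
          AND last_purchase_date >= DATE('now', '-90 days')
    """

    def build_segment(conn: sqlite3.Connection) -> list[tuple]:
        # Returns (customer_id, email) pairs eligible for the campaign.
        return conn.execute(SEGMENT_SQL).fetchall()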

Posted 30+ days ago
