
Auto-apply to these data science jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

Principal, Software Engineer - Data & Enablement
Torc Robotics - Ann Arbor, MI
About the Company
At Torc, we have always believed that autonomous vehicle technology will transform how we travel, move freight, and do business. A leader in autonomous driving since 2007, Torc has spent over a decade commercializing our solutions with experienced partners. Now a part of the Daimler family, we are focused solely on developing software for automated trucks to transform how the world moves freight. Join us and catapult your career with the company that helped pioneer autonomous technology, and the first AV software company with the vision to partner directly with a truck manufacturer.

Meet The Team:
Torc is looking for an experienced principal engineer to serve as the architect and pragmatic visionary for the data ecosystem for autonomous vehicle technology. This role is pivotal to the success of the organization and comes with high visibility, responsibility, and technical impact. This person must have a strong technical foundation in building and delivering high-performance, resilient, and scalable cloud computing solutions, and will bring strategic insights, technical acumen, and mentorship while facilitating collaboration across the organization.

What You'll Do:
- Drive and grow the strategic technical vision for the Data & Enablement Division, responsible for foundational services and systems used by engineers and customers to build and support our autonomous trucking platform.
- Work within Torc's Principal Community to mature our technical vision and drive technical direction across the organization.
- Collaborate with stakeholders to understand requirements and design scalable and maintainable software solutions that support the broader Technology organization.
- Be a role model and set the standard of highest-level technical excellence and rigor within the Data & Enablement Division.
- Provide technical leadership and guidance to engineering teams in the Data & Enablement Division.
- Participate in design and code reviews, providing constructive feedback to ensure high-quality solutions that adhere to established standards and practices.
- Mentor and guide division engineers, assisting in their technical growth and fostering a culture of learning and development within the division.
- Troubleshoot and debug the most critical issues, determining the root causes, implementing appropriate solutions, and setting up safeguards against recurrences.
- Analyze, and mentor others to analyze, system performance to implement necessary optimizations that enhance speed, efficiency, and scalability.
- Participate in project planning and collaborate with technical product managers on the priorities and customer expectations of the proposed software solutions.
- Stay up to date with the latest industry trends, technologies, and best practices for potential integration with existing solutions.
What You'll Need to Succeed:
- Bachelor's Degree in Computer Science, Robotics, Electrical Engineering, or a related technical field, plus demonstrated competencies and technical proficiencies typically acquired through 20+ years of experience; OR Master's Degree in Computer Science, Robotics, Electrical Engineering, or a related technical field, plus demonstrated competencies and technical proficiencies typically acquired through 10+ years of experience
- Ten-plus years of experience building and maintaining workloads in public cloud environments
- Strong technical communication skills, written and verbal, that scale to a diverse workforce
- Strong problem-solving skills with the ability to analyze and understand complex software system issues and evolving technical challenges
- Strong proficiency in Python and a commitment to test-driven development patterns, continuous integration and delivery, and infrastructure as code
- Strong ability to align technical objectives to business values and articulate the associated business value of technical work
- Strong time management and organization skills to plan, develop, prioritize effectively, and maintain competing demands simultaneously with frequent interruptions and in a fast-paced environment
- Ability to facilitate and drive collaborative engagement across technical teams in a large engineering organization, in person and virtually
- Knowledge of AWS serverless architectures (Lambda, Batch, ECS Fargate, Glue, Athena) is preferred
- Familiarity with robotics and advanced driver assistance systems is preferred
- Experience developing data warehousing, data lake, or data mesh solutions is preferred
- Experience scaling software and infrastructure architectures for simulation environments is preferred

Perks of Being a Full-time Torc'r
Torc cares about our team members and we strive to provide benefits and resources to support their health, work/life balance, and future. Our culture is collaborative, energetic, and team focused. Torc offers:
- A competitive compensation package that includes a bonus component and stock options
- 100% paid medical, dental, and vision premiums for full-time employees
- 401K plan with a 6% employer match
- Flexibility in schedule and generous paid vacation (available immediately after start date)
- Company-wide holiday office closures
- AD+D and Life Insurance

Hiring Range for Job Opening
US Pay Range: $226,400 - $271,700 USD

At Torc, we're committed to building a diverse and inclusive workplace. We celebrate the uniqueness of our Torc'rs and do not discriminate based on race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, veteran status, or disabilities. Even if you don't meet 100% of the qualifications listed for this opportunity, we encourage you to apply.

Posted 3 weeks ago

Data Center Technician (On-site Atlanta)
Vultr - Atlanta, GA
Who We Are
Vultr is on a mission to make high-performance cloud infrastructure easy to use, affordable, and locally accessible for enterprises and AI innovators around the world. With 32 cloud data center locations around the world, Vultr is trusted by hundreds of thousands of active customers across 185 countries for its flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Founded by David Aninowsky and self-funded for over a decade, Vultr has grown to become the world's largest privately-held cloud infrastructure company.

Vultr Cares
- Excellent Medical Benefits w/ 100% company-paid premiums for the employee-only plan + 100% company-paid dental & vision premiums
- 401(k) plan that matches 100% up to 4% with immediate vesting
- Professional Development Reimbursement of $2,500 each year
- 11 Holidays + Paid Time Off Accrual + Rollover Plan + take your birthday off
- Commitment matters to Vultr! Increased PTO at 3 year & 10 year anniversary + 1 month paid sabbatical every 5 years + Anniversary Bonus each year
- $500 first year remote office setup + $400 each following year for new equipment
- Internet reimbursement up to $75 per month
- Gym membership reimbursement up to $50 per month
- Company-paid Wellable subscription

Join Vultr:
As a data center technician, on a day-to-day basis you will be responsible for handling incoming data center support requests.

What to expect:
- Installing and performing ongoing maintenance on servers and network equipment, including rack & stack of servers and switches, cabling, and physical configuration of devices.
- Responding to reported server, network, and infrastructure issues.
- Working with facility staff to ensure that power, cooling, and all other facility-provided services are working properly; coordinating any maintenance, and ensuring that any outages are addressed and escalated accordingly.
- Running hardware diagnostics and replacing failing parts in a timely manner.
- Working with vendor warranty technicians to ensure that any warranty issues are resolved promptly and properly.
- Proactively identifying issues and areas to improve efficiency, and developing plans to resolve them.
- Setting up, maintaining, and documenting spare parts inventory.
- Collaborating with software and network engineering teams on cybersecurity and network efficiency.
- Upgrading internal system components, including CPUs, memory, hard drives, and network cables.
- Maintaining detailed documentation of infrastructure, work, and physical parts inventory.
- Escalating issues as needed to ensure prompt resolution.

Our new team member will need:
- 5+ years of experience working in a data center as a technician or similar role installing, configuring, and troubleshooting server and network equipment
- Ability to work independently to manage projects within the facility
- Firm understanding of servers, network equipment, and data center facility services like power, cooling, and cross connections to service providers
- Experience with troubleshooting, building, repairing, and upgrading servers
- Strong organizational skills
- Superb communication skills

Compensation
$65,000 - $75,000. This salary can vary based on location, years of experience, background, and skill set.

Vultr is committed to an inclusive workforce where diversity is celebrated and supported. All employment decisions at Vultr are based on business needs, job requirements, and individual qualifications.
Vultr regards the lawful and correct use of personal information as important to the accomplishment of our objectives, to the success of our operations, and to maintaining confidence between those with whom we deal and ourselves. As such, the use of various key privacy controls enables Vultr's treatment of personal information to meet current regulatory guidelines and laws. Workforce members have the right under US state law where and when applicable, and certain other privacy and data protection laws as applicable, to: fair and equal treatment, knowing what personal data we gather and retain, for what purpose, and the ability to access and/or delete such data. You also have the right to opt out of communications from Vultr and approved third parties at any time.

Posted 30+ days ago

Sr. Data Reliability Engineer I
DoubleVerify - New York, NY
Sr. Data Reliability Engineer I
Location: New York, NY
Hybrid Model: 3x per week

Who we are
DoubleVerify is the leading independent provider of marketing measurement software, data, and analytics that authenticates the quality and effectiveness of digital media for the world's largest brands and media platforms. DV provides media transparency and accountability to deliver the highest level of impression quality for maximum advertising performance. Since 2008, DV has helped hundreds of Fortune 500 companies gain the most from their media spend by delivering best-in-class solutions across the digital ecosystem, helping to build a better industry. Learn more at www.doubleverify.com.

Position Overview:
The Sr. Data Reliability Engineer I is an integral part of the Data Reliability (DRE) Team, responsible for analyzing and externalizing DoubleVerify's data internally as well as monitoring, troubleshooting, and improving the company's various data pipelines and technologies.

Responsibilities:
- You will gain in-depth knowledge of how data is collected, processed, and externalized to clients within DoubleVerify's architecture
- You will script in Python and SQL extensively
- You will work with data analysis tools such as Splunk/Grafana to create reports and data visualizations
- You will work with Databricks, BigQuery, Snowflake, OLTP, MongoDB, etc.
- You will work with Kubernetes, Docker, Terraform, Helm charts, etc.
- You will be thrilled at the prospect of building strong relationships with different teams in the company, solving operational issues, and implementing quality improvements
- You will be part of the on-call rotation

Requirements:
- Bachelor's degree in CS or equivalent experience; a degree in a technical field is preferred
- 3+ years of experience writing advanced SQL and scripting languages (Python, bash, etc.)
- 3+ years of Linux experience
- 3+ years of experience working with SQL/NoSQL databases and data warehouses such as Databricks, BigQuery, Snowflake, MongoDB, etc.
- Knowledge of cloud computing fundamentals and experience working with public cloud providers such as GCP, AWS, Azure, etc.
- Experience working with GitHub, CI/CD, GitLab, or other automation/delivery tools
- Good understanding of BI and data warehousing concepts (ETL, OLAP vs. OLTP, Slowly Changing Dimensions)
- Demonstrated ability to adapt quickly, learn new skill sets, and understand operational challenges
- Strong analytical, problem-solving, negotiation, and organizational skills with a clear focus under pressure
- Must be proactive with a proven ability to execute multiple tasks simultaneously
- Excellent interpersonal skills, including relationship building with diverse, global, cross-functional teams
- Good understanding of process automation

Nice to have:
- Previous experience in AdTech is a plus
- Experience working with Kubernetes, Docker, Terraform, Helm charts, and fundamentals of DevOps

The successful candidate's starting salary will be determined based on a number of non-discriminating factors, including qualifications for the role, level, skills, experience, location, and balancing internal equity relative to peers at DV. The estimated salary range for this role based on the qualifications set forth in the job description is between $76,000 - $171,000. This role will also be eligible for bonus/commission (as applicable), equity, and benefits.
The range above is for the expectations as laid out in the job description; however, we are often open to a wide variety of profiles and recognize that the person we hire may be more or less experienced than this job description as posted. Don’t meet every single requirement? Studies have shown that women and people of color are less likely to apply to jobs unless they meet every single qualification. At DoubleVerify we are dedicated to building a diverse, inclusive, and authentic workplace, so if you’re excited about this role but your past experience doesn’t align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles!  

Posted 4 weeks ago

Data Engineer
Jun Group - New York, NY
Jun Group is a technology company building a world where consumers are in control of their data and advertisers can reach them directly. Intelligent advertising that inspires trust is our guiding principle. We're passionate about making advertising better for everyone through our consent-based approach that empowers the world's largest publishers, brands, and agencies to achieve their goals with integrity, transparency, and peace of mind.

We are looking for a Data Engineer to join our engineering team to help us manage our diverse and growing set of initiatives. This position is full-time with the option of working on-site from our NYC headquarters. Jun Group will only consider candidates for this position who are currently legally authorized to work in the United States.

Responsibilities include
- Collaborate with our Product and Strategy teams to build customized reporting solutions for our clients
- Integrate new internal data sources as well as third-party APIs to enhance our reporting capabilities
- Streamline processes to allow for rapid prototyping to meet the reporting needs of new clients
- Work with our DevOps team to create monitoring which ensures data integrity
- Experiment with new tech to find the right tool for the job
- Use Kanban to manage multiple releases per week
- Maintain high code quality through code reviews and automated tests

Here are a few indicators that you're the right person
- You enjoy a fun, creative, and engaging working atmosphere free of brilliant jerks
- You want to be part of a small team inside a large company with massive opportunity for growth
- You enjoy collaboration with other teams including product, biz dev, and our in-house QA team
- You eagerly dig into complex engineering problems

Preferred skills and experience
- 2+ years of relevant work experience
- You have built customer-facing dashboards and reports using data visualization tools like Holistics, Looker, or Tableau
- You are comfortable writing SQL and manipulating large structured or unstructured datasets for analysis
- You've designed schemas and optimized queries for a data warehouse like BigQuery or ClickHouse
- You've built and maintained batch ETL/ELT jobs using an orchestration platform like Airflow, Temporal, or Dagster
- Familiarity with Google Cloud or AWS big data products

Some company benefits include
- Competitive Pay
- Work Life Balance & Hybrid Work Life
- Health, Dental, and Vision Insurance
- Mental Health Resources
- Volunteer Opportunities

Greater NY-area Residents: We currently have a hybrid remote work policy. All Jun Group employees living within a 90-minute (one-way) commute of our NYC office are expected to be in the office three days per week.

Salary Range: $80,000 - $100,000, plus incentive pay

We're open to allowing the right person to learn our industry on the job. We welcome diversity and non-traditional paths into all of our roles. We believe in hiring the right person as opposed to the right combination of keywords. Communications regarding your application will only come from @jungroup.com or @hyprmx.com email addresses.

Posted 3 weeks ago

GenAI and LLM Architect
Credera Experienced Hiring Job Board - Dallas, TX
We are looking for an enthusiastic GenAI and LLM Architect to add to Credera's Data capability group. Our ideal candidate is excited about leading project-based teams in a client-facing role to analyze large data sets to derive insights through machine learning (ML) and artificial intelligence (AI) techniques. They have strong experience in data preparation and analysis using a variety of tools and programming techniques, building and implementing models, and creating and running simulations. The architect should be familiar with the deployment of enterprise-scale models into a production environment; this includes leveraging full development lifecycle best practices for both cloud and on-prem solutions across a variety of use cases.

You will act as the primary architect and technical lead on projects to scope and estimate work streams, architect and model technical solutions to meet business requirements, and serve as a technical expert in client communications. On a typical day, you might expect to participate in design sessions, provision environments, and coach and lead junior resources on projects.

WHO YOU ARE:
- Proven experience in the architecture, design, and implementation of large-scale and enterprise-grade AI/ML solutions
- 5+ years of hands-on statistical modeling and/or analytical experience in an industry or consulting setting
- Master's degree in statistics, mathematics, computer science, or a related field (a PhD is preferred)
- Experience with a variety of ML and AI techniques (e.g. multivariate/logistic regression models, cluster analysis, predictive modeling, neural networks, deep learning, pricing models, decision trees, ensemble methods, etc.)
- Proficiency in programming languages and frameworks such as Python, TensorFlow, PyTorch, or Hugging Face Transformers for model development and experimentation
- Strong understanding of NLP fundamentals, including tokenization, word embeddings, language modeling, sequence labeling, and text generation
- Experience with data processing using LangChain, data embedding using LLMs, vector databases, and prompt engineering
- Advanced knowledge of relational and non-relational databases (SQL, NoSQL)
- Proficient in large-scale distributed systems (Hadoop, Spark, etc.)
- Experience with designing and presenting compelling insights using visualization tools (RShiny, R, Python, Tableau, Power BI, D3.js, etc.)
- Passion for leading teams and providing both formal and informal mentorship
- Experience with wrangling, exploring, transforming, and analyzing datasets of varying size and complexity
- Knowledge of tools and processes to monitor model performance and data quality, including model tuning experience
- Strong communication and interpersonal skills, and the ability to engage customers at a business level in addition to a technical level
- Stay current with AI/ML trends and research; be a thought leader in the AI area
- Experience with implementing machine learning models in production environments through one or more cloud platforms: Google Cloud Platform, Azure cloud services, AWS cloud services

Basic Qualifications
- Thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success
- Contribute in a team-oriented environment
- Prioritize multiple tasks in order to consistently meet deadlines
- Creatively solve problems in an analytical environment
- Adapt to new environments, people, technologies, and processes
- Excel in leadership, communication, and interpersonal skills
- Establish strong work relationships with clients and team members
- Generate ideas and understand different points of view

Learn More
Credera is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, and AI and technology expertise to deliver valuable customer experiences and accelerated growth across a broad range of industries worldwide. Our one-of-a-kind global boutique approach means we provide our clients with tailored solutions unique to their organization that can scale due to our extensive footprint. As a values-led organization, our mission is to make an extraordinary impact on our clients, our people, and our community. We believe it is this approach that has allowed us to work with and transform the most influential brands and organizations in the world, from strategy through to execution. More information is available at www.credera.com. We are part of the OPMG Group of Companies, a division of Omnicom Group Inc.

Hybrid Work Model:
Our employees have the flexibility to work remotely two days per week. We expect our team members to spend 3 days per week in person, with the flexibility to choose the days and times that work best for both them and their project or internal teams. This could be at a Credera office or at the client site. You'll work closely with your project team to align on how you balance the flexibility that we want to provide with the connection of being together to produce amazing results for our clients.

The Why: We are passionate about growing our people both personally and professionally. Our philosophy is that in-person engagement is critical for our ability to develop deep relationships with our clients and our team members – it's how we earn trust, learn from others, and ultimately become better consultants and professionals.

Travel: Our goal is to keep out-of-market travel to a minimum, and most projects do not require significant travel. While certain projects can require frequent travel (up to 80% for a period of time), our average travel percentage over a year for team members is typically between 10-30%. We try to take a personal approach to travel. You will submit your travel preferences, which our staffing teams will take into account when aligning you to a role.
Credera will never ask for money up front and will not use apps such as Facebook Messenger, WhatsApp or Google Hangouts for communicating with you. You should be very wary of, and carefully scrutinize, any job opportunity that asks for money prior to starting and/or one where all communications take place exclusively via chat.

Posted 3 weeks ago

Credera Experienced Hiring Job Board - Houston, TX
Credera is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, and AI and technology expertise to deliver valuable customer experiences and accelerated growth across various industries. We continuously evolve our services to meet the needs of future organizations and reflect modern best practices. Our unique global approach provides tailored solutions, transforming the most influential brands and organizations worldwide.

Our employees, the lifeblood of our company, are passionate about making an extraordinary impact on our clients, colleagues, and communities. This passion drives how we spend our time, resources, and talents. Our commitment to our people and work has been recognized globally. Please visit our employer awards page: https://www.credera.com/awards-and-recognition.

As an Architect in Credera's Data capability group, you will lead teams in implementing modern data architecture, data engineering pipelines, and advanced analytical solutions. Our projects range from designing and implementing the latest data platform approaches (i.e. Lakehouse, DataOps, Data Mesh) using best practices and cloud solutions, building scalable data and ML pipelines, and democratizing data through modern governance approaches, to delivering data products using advanced machine learning, visualization, and integration approaches.

You will act as the primary architect and technical lead on projects to scope and estimate work streams, architect and model technical solutions to meet business requirements, serve as a technical expert in client communications, and mentor junior project team members. On a typical day, you might expect to participate in design sessions, build data structures for an enterprise data lake or statistical models for a machine learning algorithm, coach junior resources, and manage technical backlogs and release management tools. Additionally, you will seek out new business development opportunities at existing and new clients.

WHO YOU ARE:
- You have a minimum of 5 years of technical, hands-on experience building, optimizing, and implementing data pipelines and architecture
- Experience leading teams to wrangle, explore, and analyze data to answer specific business questions and identify opportunities for improvement
- You are a highly driven professional and enjoy serving in a fast-paced, dynamic client-facing role where delivering solutions to exceed high expectations is a measure of success
- You have a passion for leading teams and providing both formal and informal mentorship
- You have strong communication and interpersonal skills, and the ability to engage customers at a business level in addition to a technical level
- You have a deep understanding of data governance and data privacy best practices
- You incorporate the usage of AI tooling, efficiencies, and code assistance tooling in your everyday workflows
- You have a degree in Computer Science, Computer Engineering, Engineering, Mathematics, Management Information Systems, or a related field of study

The ideal candidate will have recent technical knowledge of the following:
- Programming languages (e.g. Python, Java, C++, Scala, etc.)
- SQL and NoSQL databases (MySQL, DynamoDB, CosmosDB, Cassandra, MongoDB, etc.)
- Data pipeline and workflow management tools (Airflow, Dagster, AWS Step Functions, Azure Data Factory, etc.)
- Stream-processing systems (e.g. Storm, Spark Streaming, Pulsar, Flink, etc.)
- Data warehouse design (Databricks, Snowflake, Delta Lake, Lake Formation, Iceberg)
- MLOps platforms (SageMaker, Azure ML, Vertex AI, MLflow)
- Container orchestration (e.g. Kubernetes, Docker Swarm, etc.)
- Metadata management tools (Collibra, Atlas, DataHub, etc.)
- Experience with the data platform components on one or more of the following cloud service providers: AWS, Google Cloud Platform, Azure

Basic Qualifications
- Thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success
- Contribute in a team-oriented environment
- Prioritize multiple tasks in order to consistently meet deadlines
- Creatively solve problems in an analytical environment
- Adapt to new environments, people, technologies, and processes
- Excel in leadership, communication, and interpersonal skills
- Establish strong work relationships with clients and team members
- Generate ideas and understand different points of view

Learn More:
Credera is part of the Omnicom Precision Marketing Group (OPMG), a division of Omnicom Group Inc. OPMG is a global network of agencies that leverage data, technology, and CRM to create personalized and impactful customer experiences. OPMG offers a range of services, such as data-driven product/service design, technology strategy and implementation, CRM/loyalty strategy and activation, econometric and attribution modelling, technical and business consulting, and digital experience design and development.

Benefits:
Credera provides a competitive salary and comprehensive benefits plan. Benefits include health, mental health, vision, dental, and life insurance, prescriptions, fertility and adoption benefits, community service days, paid parental leave, PTO, 14 paid holidays, matching 401(k), Healthcare & Dependent Flexible Spending Accounts, and disability benefits. For more information regarding Omnicom benefits, please visit www.omnicombenefits.com.

Hybrid Working Model:
Our employees have the flexibility to work remotely two days a week. We expect team members to spend three days in person, with the freedom to choose the days and times that best suit them, their project, and their teams. You'll collaborate with your project team to balance flexibility with the benefits of in-person connection, delivering outstanding results for our clients.

The Why: In-person engagement is essential for building strong relationships with clients and colleagues. It fosters trust, encourages learning, and helps us grow as consultants and professionals.

Travel: For our consulting roles, our goal is to minimize travel, and most projects do not require extensive travel. While some projects may involve up to 80% travel for a period, the annual average for team members is typically 10-30%. We take a personal approach to travel by considering your submitted preferences when assigning roles.

All qualified applicants will receive consideration for employment without regard to race, color, religion, gender identity, sexual orientation, national origin, age, genetic information, veteran status, or disability.

Credera will never ask for money up front and will not use apps such as Facebook Messenger, WhatsApp or Google Hangouts for communicating with you. You should be very wary of, and carefully scrutinize, any job opportunity that asks for money prior to starting and/or one where all communications take place exclusively via chat.

Posted 3 weeks ago

Senior GenAI and LLM Architect
Credera Experienced Hiring Job Board - Chicago, IL
We are looking for an enthusiastic Senior GenAI and LLM Architect to add to Credera's Data capability group. Our ideal candidate is excited about leading project-based teams in a client-facing role to analyze large data sets to derive insights through machine learning (ML) and artificial intelligence (AI) techniques. They have strong experience in data preparation and analysis using a variety of tools and programming techniques, building and implementing models, and creating and running simulations. The Senior Architect should be familiar with the deployment of enterprise-scale models into a production environment; this includes leveraging full development lifecycle best practices for both cloud and on-prem solutions across a variety of use cases.

You will act as the primary architect and technical lead on projects to scope and estimate work streams, architect and model technical solutions to meet business requirements, and serve as a technical expert in client communications. On a typical day, you might expect to participate in design sessions, provision environments, and coach and lead junior resources on projects.

WHO YOU ARE:
- 8+ years of proven experience in the architecture, design, and implementation of large-scale and enterprise-grade AI/ML solutions, including hands-on statistical modeling and/or analytical experience in an industry or consulting setting
- Master's degree in statistics, mathematics, computer science, or a related field (a PhD is preferred)
- Experience with a variety of ML and AI techniques (e.g. multivariate/logistic regression models, cluster analysis, predictive modeling, neural networks, deep learning, pricing models, decision trees, ensemble methods, etc.)
- Proficiency in programming languages and frameworks such as Python, TensorFlow, PyTorch, or Hugging Face Transformers for model development and experimentation
- Strong understanding of NLP fundamentals, including tokenization, word embeddings, language modeling, sequence labeling, and text generation
- Experience with data processing using LangChain, data embedding using LLMs, vector databases, and prompt engineering
- Advanced knowledge of relational and non-relational databases (SQL, NoSQL)
- Proficient in large-scale distributed systems (Hadoop, Spark, etc.)
- Experience with designing and presenting compelling insights using visualization tools (RShiny, R, Python, Tableau, Power BI, D3.js, etc.)
- Passion for leading teams and providing both formal and informal mentorship
- Experience with wrangling, exploring, transforming, and analyzing datasets of varying size and complexity
- Knowledge of tools and processes to monitor model performance and data quality, including model tuning experience
- Strong communication and interpersonal skills, and the ability to engage customers at a business level in addition to a technical level
- Stay current with AI/ML trends and research; be a thought leader in the AI area
- Experience with implementing machine learning models in production environments through one or more cloud platforms: Google Cloud Platform, Azure cloud services, AWS cloud services

Basic Qualifications
- Thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success
- Contribute in a team-oriented environment
- Prioritize multiple tasks in order to consistently meet deadlines
- Creatively solve problems in an analytical environment
- Adapt to new environments, people, technologies, and processes
- Excel in leadership, communication, and interpersonal skills
- Establish strong work relationships with clients and team members
- Generate ideas and understand different points of view

Learn More
Credera is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, and AI and technology expertise to deliver valuable customer experiences and accelerated growth across a broad range of industries worldwide. Our one-of-a-kind global boutique approach means we provide our clients with tailored solutions unique to their organization that can scale due to our extensive footprint. As a values-led organization, our mission is to make an extraordinary impact on our clients, our people, and our community. We believe it is this approach that has allowed us to work with and transform the most influential brands and organizations in the world, from strategy through to execution. More information is available at www.credera.com. We are part of the OPMG Group of Companies, a division of Omnicom Group Inc.

Hybrid Work Model:
Our employees have the flexibility to work remotely two days per week. We expect our team members to spend 3 days per week in person, with the flexibility to choose the days and times that work best for both them and their project or internal teams. This could be at a Credera office or at the client site. You'll work closely with your project team to align on how you balance the flexibility that we want to provide with the connection of being together to produce amazing results for our clients.

The Why: We are passionate about growing our people both personally and professionally. Our philosophy is that in-person engagement is critical for our ability to develop deep relationships with our clients and our team members – it's how we earn trust, learn from others, and ultimately become better consultants and professionals.

Travel: Our goal is to keep out-of-market travel to a minimum, and most projects do not require significant travel. While certain projects can require frequent travel (up to 80% for a period of time), our average travel percentage over a year for team members is typically between 10-30%. We try to take a personal approach to travel. You will submit your travel preferences, which our staffing teams will take into account when aligning you to a role.
Credera will never ask for money up front and will not use apps such as Facebook Messenger, WhatsApp or Google Hangouts for communicating with you. You should be very wary of, and carefully scrutinize, any job opportunity that asks for money prior to starting and/or one where all communications take place exclusively via chat.

Posted 3 weeks ago

Data Scientist
Bertram Capital Management - Foster City, CA
Bertram Capital is a private equity firm targeting investments in lower middle market companies. Since its inception in 2006, the firm has raised over $3.5B of capital commitments. Bertram has distinguished itself in the private equity community by combining venture capital operating methodologies with private equity financial discipline to empower its portfolio companies to unlock their full business potential. This approach is unique in that Bertram is not singularly focused on achieving its investment returns through financial engineering and the extraction of near-term cash flow. Instead, Bertram focuses on reinvestment and technology enablement to drive growth and value through digital marketing, e-commerce, big data and analytics, application development, and internal and external platform optimization. Visit www.bcap.com for more information.

Position Description
We are seeking a versatile Data Scientist to join our team. As a generalist, you will leverage your skills in data processing, modeling, visualization, and prompt engineering to evaluate potential investments, solve critical business challenges, and identify growth opportunities across multiple industry verticals, including consumer, industrial, and business services. Your work will directly impact investment decisions, revenue growth, operational efficiency, and the seamless integration of add-on acquisitions. This is a unique opportunity to work across multiple businesses, partnering with the Bertram Capital investment team, as well as the marketing, sales, and operational teams at portfolio companies. Experience in the investment management or financial industry is an advantage, as is exposure to working with large language models (LLMs) like ChatGPT.

Responsibilities:
- Data Analysis & Processing: Collect, clean, and preprocess large datasets from diverse sources to ensure data quality and usability.
- Model Development: Build, evaluate, and deploy predictive and descriptive models to solve business problems, such as customer segmentation, demand forecasting, and operational optimization.
- Data Visualization: Create compelling visualizations and dashboards to communicate insights effectively to both technical and non-technical users.
- Cross-Functional Collaboration: Partner with management teams to make data-based decisions. Work with other members of Bertram Labs to develop marketing campaigns and internal tools.
- Revenue Growth & Operational Efficiency: Use analytics to identify opportunities for revenue optimization, operational improvements, and streamlined M&A integration processes.
- Leverage New AI Technologies: Drive adoption of emerging generative AI technology within Bertram Capital and portfolio companies.
- Industry Research: Stay updated on trends in the consumer, industrial, and business services sectors to inform data-driven strategies.

Qualifications
- BS, MS, or PhD in a quantitative field, or equivalent work experience
- 2+ years working in data science
- Exceptional communication skills
- Experience working cross-functionally and collaboratively
- Proficient in SQL, Python, and typical data science libraries
- Proficient in extracting data from databases, APIs, web scraping, and/or scripting
- Proficient in business intelligence tools such as Tableau, PowerBI, or Looker
- Experience in prompting and using LLMs such as ChatGPT, Claude, Gemini, etc.

Compensation and Benefits
The expected salary range for this position is $180,000 - $210,000 total annual compensation.
Offered salary may be based on a variety of factors including skills, experience, and qualifications for the role. After one year of tenure, employees will receive an additional annual bonus.

Comprehensive medical, dental, and vision benefits are provided at no cost to the employee. We offer a generous 401K match as well as a "take what you need" PTO policy. Other perks include: cell phone stipend, engaging team events, and holiday parties. If hired, employee will be in an "at-will position" and the Company reserves the right to modify base salary (as well as any other discretionary payment or compensation program) at any time, including for reasons related to individual performance, Company or individual department/team performance, and market factors.

Diversity, Equity, and Inclusion
At Bertram Capital we value and celebrate the many perspectives that arise from a variety of cultures, genders, religions, national origins, ages, abilities, socioeconomic status, and sexual orientation. Our commitment to Diversity, Equity and Inclusion (DEI) ensures that Bertram is a place that attracts, grows, and promotes top talent from all backgrounds.

Posted 3 weeks ago

Engineering Manager, Company Data & Search
Grata - New York, NY
Grata is revolutionizing private market dealmaking. We make it easy to find, research, and engage with private companies by building the most comprehensive, accurate, and searchable proprietary data on private companies, their financials, and their owners, while working with leading-edge tools such as generative AI and agentic workflows. Our customers — leading investors, investment bankers, management consultants, and corporate development teams — rely on Grata to uncover hidden opportunities and win more deals. With over 1,000 customers and consistent recognition from G2, PE Wire, and others, Grata is the clear market leader — but we're only scratching the surface of what's possible.

We're looking for a dynamic Engineering Manager who thrives on complexity and is passionate about building products that tackle the unique challenges of working with sophisticated datasets. This is an opportunity to lead a team shipping AI-enabled features that transform how users discover and engage with private company data and market intelligence. You'll guide your team in building cutting-edge user applications that bring powerful insights to life, while driving outcomes that align technical execution with business impact.

We are proud of our strong company culture, which is the cornerstone of our success. We value curiosity, collaboration, and a growth mindset. We foster an environment where every team member's voice is heard, innovation is encouraged, and learning is a continuous process. Our values guide how we work — with integrity, empathy, and a relentless focus on excellence. Grata is a hybrid company, which means our employees work from our NYC office (near Bryant Park) on Mondays, Tuesdays, and Thursdays.

At Grata, we will expect you to:
- Build AI-enabled data pipelines that deliver data to users
- Own and deliver the roadmap for data pipelines and search infrastructure that enable core Grata experiences
- Drive improvements to our company data ingestion, entity resolution, and enrichment pipelines
- Ensure performance, uptime, and scalability of the search indexing stack (Elasticsearch/Postgres)
- Oversee the identification, management, and resolution of technical debt to ensure long-term scalability and performance
- Foster the professional growth and career development of engineering team members through mentorship and guidance
- Continuously enhance and optimize engineering processes to improve efficiency and quality
- Build and lead a team of high-performing engineers with a strong focus on collaboration and excellence
- Translate strong technical visions into actionable plans, providing guidance and support for execution
- Manage an engineering team, balancing responsibilities with light coding tasks (80% management, 20% coding)
- Lead sprint planning, roadmapping, and collaborative exercises with Product teams
- Partner closely with Product Managers and Engineering leadership to align and execute on engineering initiatives

What we are looking for:
- 2+ years of experience in engineering team leadership or management
- In-depth understanding of product engineering and its lifecycle
- Strong knowledge of full-stack development principles and best practices
- Proven experience guiding teams working on application development projects
- Demonstrated success in developing and advancing the careers of team members
- Commitment to maintaining an exceptionally high standard for quality and engineering processes
Tech Stack: Python, Django, React, AWS, Elasticsearch, Postgres

Benefits & Perks:
- Medical, dental, and vision plans: we offer plans with 80% coverage of premiums for employees
- Company-sponsored lunch through Grubhub on a weekly basis
- Unlimited PTO policy
- Flexible Work Location (FWL) policy that allows you to work from home 24 days of the year
- Other benefits: 12 weeks of parental leave, 401k, pre-tax commuter benefits, dog-friendly office

Grata is committed to providing competitive cash compensation and benefits. The compensation offered for this role will be based on multiple factors such as location, the role's scope and complexity, and the candidate's experience and expertise, and may vary from the range provided below. For roles based in New York City, the estimated base salary for this role is $175,000 - $225,000 per year.

Grata is proud to be an Affirmative Action, Equal Opportunity Employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or other applicable legally protected characteristic. Grata considers qualified applicants with criminal histories, consistent with applicable federal, state and local law. Grata is also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, please let your recruiter know. In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire.

Posted 1 week ago

Data Scientist
NT Concepts - Vienna, VA
NTC OVERVIEW:
We are seeking a Data Scientist to join our team. Working at NT Concepts means that you are part of an innovative, agile company dedicated to solving the most critical challenges in National Security. We're looking for the best and the brightest to join us in supporting this mission. If meaningful work, initiative, creativity, and continuous self-improvement are important to your career, join our growing team and discover What's Next for you.

Mission Focus:
As a Data Scientist, you will have the unique opportunity to research, design, and implement cutting-edge algorithms for a program focused on protecting computer vision algorithms from adversarial AI attacks. This requires data curation, coding in Python with PyTorch, and production of model explainability and model performance visuals. Additionally, you will contribute to the program's source code, implementing data curation and data science techniques.

Our delivery teams are driven to explore new ideas and technology, and care deeply about collaboration, feedback, and iteration. We follow SAFe agile practices, embrace the Ops ethos (DataOps/DevSecOps/MLOps) to "automate first", use modern tech stacks, and constantly challenge each other to grow and improve. If cutting-edge data science projects resonate with you, and you care deeply about joining a mission-driven company with a strong growth direction and diverse culture, we'd love to learn more about you. Check out the details below, and let's connect. Technical members of our solutions teams require little guidance, but love to learn, collaborate, and problem solve. This position requires a junior to mid level of experience, a passion for mission support, and a strong desire to solve our customers' hardest technical and data challenges.

Clearance: Ability to obtain a TS/SCI clearance is required. US citizenship is required.

Location/Flexibility: Vienna, VA with remote flexibility.

Responsibilities:
- Research, design, implement, and evaluate novel algorithms
- Implement algorithms in Python in a repeatable and scalable way
- Drive requirements for data preprocessing ahead of algorithm development
- Support creative delivery of solutions in a quickly evolving domain of AI/ML

Qualifications:
- You have 2+ years of experience designing and implementing AI/ML techniques, specifically those designed for imagery
- You have foundational knowledge of AI/ML methods and implementation strategies
- You have worked with imagery data, both overhead and ground-level imagery
- You desire a fast-paced, collaborative, Agile environment
- You are familiar with machine and deep learning libraries such as Scikit-learn and PyTorch
- You think critically about hard problems
- You are proficient with the Python programming language
- You have worked with Git version control systems
- You are no stranger to the Linux OS

Physical Requirements:
- Prolonged periods sitting at a desk and working on a computer
- Must be able to lift up to 10-15 pounds at times

About NT Concepts
Founded in 1998 and headquartered in the Washington DC Metro area, NT Concepts is a private, mid-tier company with clients spanning the Intelligence and Defense communities. We deliver end-to-end data and technology solutions that advance the modernization, transformation, and automation of the national security mission — solutions with real impact developed in a strong engineering culture that encourages technical growth, leadership, and creative "big idea" problem-solving. Employees are the core of NT Concepts.
We understand that world-changing concepts happen in collaborative environments. We are a company where talented teams work together using innovation and expertise to solve our clients’ most critical challenges. Here, you’ll  gain competitive benefits , opportunities to bolster your skills and develop new abilities, and a company culture dedicated to support and service. In addition to our benefits program, we encourage our employees to take part in #NTC_GivesBack , which paves the way for positive social change. If joining a stable company with strong professional growth opportunities resonates with you, and you seek vital, mission-driven projects (for some pretty cool clients) that use your specific talents, we’d love to have you move forward with us.  

Posted 3 weeks ago

Senior Data Engineer
Recorded Future - Boston, MA
With 1,000 intelligence professionals, over $300M in sales, and over 1,900 clients served worldwide, Recorded Future is the world's most advanced, and largest, intelligence company!

Senior Data Engineer, Graph Quality
Recorded Future combats cyber security threats by delivering actionable intelligence from the Security Intelligence Graph. We are looking for a Data Engineer to improve the Security Intelligence Graph. As a Data Engineer, you will build production-grade pipelines to drive data convergence from various sources within the graph, ensure that Indicators of Compromise (IOCs) are properly attributed, and improve the quality of the graph at scale.

What You'll Do:
- Work with the Graph Quality team to align, analyze, and ingest asset maps into the Security Intelligence Graph
- Develop, productize, monitor, and maintain data pipelines to analyze and ingest data at scale
- Build tools and APIs to facilitate access to data and analytics developed from the intelligence graph
- Analyze and explain patterns in data to drive business-critical decisions
- Create technical project plans and drive the successful execution of projects, with input from our Product team and other developers on the team
- Collaborate with Data Scientists, Data Engineers, and business leaders to develop and refine technical solutions
- Onboard and guide junior members of the team
- Assist in setting team goals, planning sprints, and leading Agile scrum meetings

What You'll Bring:
- 4+ years of Python programming
- 2+ years of experience with cloud computing tools, e.g. from AWS, Azure, or Google Cloud
- Experience writing scalable, production-grade applications and ETL/ELT pipelines
- Efficient and accurate problem-solving skills, including the ability to debug both software and data
- Proven ability to analyze data and apply statistical techniques to draw accurate, impactful conclusions
- Proven success in delivering projects from design and implementation to release
- Excellent attention to detail and ability to work independently while delivering high-quality results
- Excellent written and verbal communication when collaborating with colleagues across various locations and timezones, designing technical approaches, and writing documentation
- Eagerness to continue learning and teaching new skills to team members, in order to raise the bar across the team

Preferred Qualifications:
- Familiarity with both batch and streaming pipelines
- Familiarity with any of the following: message buses (e.g. Kafka, RabbitMQ), NoSQL databases (e.g. MongoDB, AWS Neptune, Neo4j), Elasticsearch
- Bachelor's/Master's degree in Computer Science, Mathematics, Statistics, Engineering, or equivalent experience
- Exposure to ML approaches, including experience productizing ML models
- Experience developing REST APIs with Python frameworks (e.g. Flask, Django, FastAPI)
- Leadership experience, with a track record of presenting information to stakeholders with varying levels of technical expertise

Why should you join Recorded Future?
Recorded Future employees (or "Futurists") represent over 40 nationalities and embody our core values of having high standards, practicing inclusion, and acting ethically. Our dedication to empowering clients with intelligence to disrupt adversaries has earned us a 4.8-star user rating from Gartner and more than 45 of the Fortune 100 companies as clients.

Want more info?
Blog & Podcast: Learn everything you want to know (and maybe some things you'd rather not know) about the world of cyber threat intelligence
LinkedIn, Instagram & Twitter: What's happening at Recorded Future
The Record: The Record is a cybersecurity news publication that explores the untold stories in this rapidly changing field
Timeline: History of Recorded Future
Recognition: Check out our awards and announcements

We are committed to maintaining an environment that attracts and retains talent from a diverse range of experiences, backgrounds, and lifestyles. By ensuring all feel included and respected for being unique and bringing their whole selves to work, Recorded Future is made a better place every day. If you need any accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to our recruiting team at careers@recordedfuture.com.

Recorded Future is an equal opportunity and affirmative action employer and we encourage candidates from all backgrounds to apply. Recorded Future does not discriminate based on race, religion, color, national origin, gender including pregnancy, sexual orientation, gender identity, age, marital status, veteran status, disability, or any other characteristic protected by law.

Recorded Future will not discharge, discipline, or in any other manner discriminate against any employee or applicant for employment because such employee or applicant has inquired about, discussed, or disclosed the compensation of the employee or applicant or another employee or applicant. Recorded Future does not administer a lie detector test as a condition of employment or continued employment. This is in compliance with the law of the Commonwealth of Massachusetts, and in alignment with our hiring practices across all jurisdictions.

Notice to Agency and Search Firm Representatives: Recorded Future will not accept unsolicited resumes from any source other than directly from a candidate. Any unsolicited resumes sent to Recorded Future, including those sent to our employees or through our website, will become the property of Recorded Future. Recorded Future will not be liable for any fees related to unsolicited resumes. Agencies must have a valid written agreement in place with Recorded Future's recruitment team and must receive written authorization before submitting resumes. Submissions made without such agreements and authorization will not be accepted and no fees will be paid.

Posted 30+ days ago

SynaptiCure Inc.Chicago, IL
About Synapticure As a patient and caregiver-founded company, Synapticure provides instant access to expert neurologists, cutting-edge treatments and trials, and wraparound care coordination and behavioral health support in all 50 states through a virtual care platform. Partnering with providers and health plans, including CMS' new GUIDE dementia care model, Synapticure is dedicated to transforming the lives of millions of individuals and their families living with neurodegenerative diseases like Alzheimer’s, Parkinson’s, and ALS. Our data team powers the insights behind patient care, operational efficiency, and innovation in clinical outcomes. The Role Synapticure is seeking a highly motivated and experienced Full Stack Data Engineer to join our growing data team. Reporting to the Head of Data, you’ll work cross-functionally with engineering, clinical, product, finance, and operations teams to drive data-informed decision-making across the organization. You will own the full analytics lifecycle — from identifying data requirements and building pipelines, to modeling datasets, uncovering insights, and creating tools that help stakeholders act. Your work will directly influence strategic direction, improve patient outcomes, and unlock efficiencies across the company. The ideal candidate is relentlessly curious, thrives in ambiguity, and combines strong technical fluency with thoughtful communication. You’ll play a key role in stewarding high-integrity, actionable data and building a culture of data literacy and continuous improvement. Job Duties – What you’ll be doing Partner with cross-functional teams to scope and execute high-impact analytical projects Build validated, documented, and QA-tested datasets from multiple source systems Design and build dashboards and tools for clinical, product, and operational visibility Conduct exploratory and statistical analyses to identify opportunities and insights Own the full data lifecycle: requirements → pipeline → modeling → visualization → insight Maintain high standards of data quality through robust documentation and testing Clearly communicate insights and recommendations to both technical and non-technical stakeholders Contribute to our internal knowledge base and support data literacy initiatives Requirements – What we look for in you Proven experience driving decision-making as a Data Engineer or similar role Experience working with CMS, clinical, or healthcare data Strong SQL skills and experience with the modern data stack (e.g., dbt, Snowflake, Looker) Proficiency with data visualization tools (e.g., Knowi, Tableau, Power BI, Looker) Experience applying machine learning to healthcare use cases such as patient stratification, care pathway optimization, and forecasting Familiarity with cloud platforms (e.g., AWS or Google Cloud) Ability to turn ambiguous business questions into structured analysis Effective communicator comfortable working across teams Detail-oriented, proactive, and eager to learn Team player with humility, curiosity, and a collaborative spirit Preferred Qualifications Prior experience in healthcare, health-tech, biopharma, or startup environments Familiarity with clinical workflows or neurodegenerative care Experience with value-based care, cohort analytics, and patient funnel metrics We’re founded by a patient and caregiver, and we’re a remote-first company. 
This means our values are at the heart of everything we do, and while we’re located all across the country, these principles are what tie us together around a common identity: Relentless focus on patients and caregivers. We are determined to provide an exceptional experience for every patient we have the privilege to serve, and we put our patients first in everything we do. Embody the spirit and humanity of those living with neurodegenerative disease. Inspired by our founders, families, and personal experiences, we recognize the seriousness of our patients’ circumstances and meet that challenge every day with empathy, compassion, kindness, joy, and most importantly – with hope. Seek to understand, and stay curious. We start by listening to one another, our partners, our patients, and their caregivers. We communicate with authenticity and humility, prioritizing honesty and directness while recognizing we always have something to learn. Embrace the opportunity. We are energized by the importance of our mission and bias toward action. Travel Expectations This is a remote-first role. Minimal travel (up to 5–10%) may be required for team offsites or company-wide events. Salary & Benefits Competitive salary commensurate with experience Comprehensive medical, dental, and vision coverage 401(k) plan with employer matching Remote-first work environment with home office stipend Generous paid time off and sick leave Professional development support and career growth opportunities
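As an illustration of the "validated, documented, and QA-tested datasets" called for above, here is a minimal sketch of the kind of data-quality check an analytics engineer might script, analogous to the uniqueness and not-null tests one would typically express in dbt. It uses an in-memory SQLite table as a stand-in for a warehouse model; the table and column names are invented, not Synapticure's schema.

```python
"""Illustrative sketch only: lightweight data-quality checks on a toy table."""

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient_visits (
        visit_id   INTEGER PRIMARY KEY,
        patient_id INTEGER NOT NULL,
        visit_date TEXT    NOT NULL
    );
    INSERT INTO patient_visits VALUES
        (1, 101, '2024-01-05'),
        (2, 102, '2024-01-06'),
        (3, 101, '2024-02-01');
""")


def check(sql: str, description: str) -> None:
    """Report FAIL if the quality query returns any offending rows."""
    bad = conn.execute(sql).fetchall()
    status = "PASS" if not bad else f"FAIL ({len(bad)} rows)"
    print(f"{status}: {description}")


# Uniqueness and not-null checks over the modeled dataset.
check("SELECT visit_id FROM patient_visits GROUP BY visit_id HAVING COUNT(*) > 1",
      "visit_id is unique")
check("SELECT * FROM patient_visits WHERE patient_id IS NULL",
      "patient_id is never null")
```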

Posted 30+ days ago

Data Operations Specialist-logo
Impact.comColumbus, OH
At impact.com our culture is our soul. We are passionate about our people, our technology, and are obsessed with customer success. Working together enables us to grow rapidly, win, and serve the largest brands in the world. We use cutting-edge technology to solve real-world problems for our clients and continue to pull ahead of the pack as the leading SaaS platform for businesses to automate their partnerships and grow their revenue like never before. We have an entrepreneurial spirit and a culture where ambition and curiosity are rewarded. If you are looking to join a team where your opinion is valued, your contributions are noticed, and you enjoy working with fun and talented people from all over the world, then this is the place for you!
impact.com, the world’s leading partnership management platform, is transforming the way businesses manage and optimize all types of partnerships—including traditional rewards affiliates, influencers, commerce content publishers, B2B, and more. The company’s powerful, purpose-built platform makes it easy for businesses to create, manage, and scale an ecosystem of partnerships with the brands and communities that customers trust to make purchases, get information, and entertain themselves at home, at work, or on the go. To learn more about how impact.com’s technology platform and partnerships marketplace is driving revenue growth for global enterprise brands such as Walmart, Uber, Shopify, Lenovo, L’Oreal, and Fanatics, visit www.impact.com.
Your Role at impact.com: Are you passionate about data and about transforming the opaque into strategic clarity? We’re on a mission to transform our Customer Engineering organization (global customer support, technical services, onboarding, and implementation teams) into a data-first team - real-time KPIs, robust analytics, and actionable insights at our fingertips. We need your data mastery and operational genius to drive this transformation from the ground up. Join us to own, innovate, and strategically reshape our data operations, directly impacting the success of this global team. You'll be at the forefront of a major data-driven organizational transformation. Your work directly enhances team efficiency, strategic clarity, and operational excellence. You'll become a critical player influencing strategic decisions and future growth.
What You'll Do: This role reports directly to the VP of Customer Engineering.
Essential Responsibilities
Lead Transformation: Drive the vision for clean, compliant, and strategic data across Salesforce, Freshdesk, Jira, and in-house systems.
Optimize Structure: Critically evaluate and enhance underlying data structures and processes.
Ensure Data Integrity: Enforce rigorous data compliance standards, implement ongoing quality assurance practices, and own cleaning of historical data.
Operationalize Insight: Establish standardized KPIs and performance metrics to empower decision-making.
Strategic Collaboration: Partner closely with our Data Analytics team to ensure seamless integration and accurate insights.
Empower Teams: Set up intuitive, actionable dashboards, automate existing manual reports, and champion adoption across the organization.
Lead Cadence: Implement effective data review cycles, ensuring continuous alignment and visibility on performance and trends.
What You Have: 5+ years of experience in data operations, business operations, or related fields. Expertise in data compliance, data quality, and operational process improvement.
Demonstrated success managing cross-functional data projects and initiatives.
Fluency with modern data stacks (BigQuery, Looker, dbt, etc.).
Demonstrated mastery of SQL and data querying.
Familiarity with ETL processes and data pipelines.
Experience with data visualization tools (preference for Looker)
Preferred Qualifications
Deeply analytical with exceptional skills in data operations and process optimization.
A highly organized and driven project manager capable of driving consensus across multiple stakeholders.
Experienced in managing data structures in Salesforce and preferably Freshdesk and Jira.
Adept at turning complex data scenarios into clear, streamlined, and actionable outcomes.
Ability to translate technical data structures into clear, accessible language for non-technical stakeholders.
Driven, autonomous, and proactive - excited by ownership, continuous improvement, and strategic problem-solving.
Salary Range: $100,000.00 - $120,000.00 per year, plus an additional 5% Company annual bonus contingent on Company performance and eligibility to receive a Restricted Stock Unit (RSU) grant.
*This is the pay range the Company believes is equitable for this position at the time of this posting. Consistent with applicable law, compensation will be determined based on the skills, qualifications, and experience of the applicant along with the requirements of the position, and the Company reserves the right to modify this pay range at any time.
Benefits (Perks):
Medical, Dental and Vision insurance
Unlimited responsible PTO
Flexible work hours
Parental Leave
Technology Stipend
Continued access to Affiliate & Partnerships Industry Fundamentals Certification by PXA
Office-only catered lunch every Thursday, a healthy snack bar, and great coffee to keep you fueled.
Flexible spending accounts and 401(k)
An employee-led culture team that plans inclusive events- meaning time together and other events to celebrate our many successes!
An established company with a cool, high-velocity work ethos, where each person can make a difference!
impact.com is proud to be an equal-opportunity workplace. All employees and applicants for employment shall be given fair treatment and equal employment opportunity regardless of their race, ethnicity or ancestry, color or caste, religion or belief, age, sex (including gender identity, gender reassignment, sexual orientation, pregnancy/maternity), national origin, weight, neurodivergence, disability, marital and civil partnership status, caregiving status, veteran status, genetic information, political affiliation, or other prohibited non-merit factors.
#LI-Hybrid
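To make the "standardized KPIs and performance metrics" idea concrete, here is a small, illustrative Python sketch computing two support KPIs from ticket records. The ticket fields and values are invented for the example and do not reflect impact.com's actual Salesforce, Freshdesk, or Jira data.

```python
"""Illustrative sketch only: resolution rate and median time-to-resolve
computed from a handful of made-up support tickets."""

from datetime import datetime
from statistics import median

tickets = [
    {"id": 1, "opened": "2024-03-01T09:00", "resolved": "2024-03-01T17:30"},
    {"id": 2, "opened": "2024-03-02T10:15", "resolved": None},  # still open
    {"id": 3, "opened": "2024-03-02T11:00", "resolved": "2024-03-03T08:00"},
]


def hours_to_resolve(ticket: dict) -> float:
    """Elapsed hours between open and resolve timestamps."""
    opened = datetime.fromisoformat(ticket["opened"])
    resolved = datetime.fromisoformat(ticket["resolved"])
    return (resolved - opened).total_seconds() / 3600


resolved = [t for t in tickets if t["resolved"] is not None]
resolution_rate = len(resolved) / len(tickets)
median_hours = median(hours_to_resolve(t) for t in resolved)

print(f"Resolution rate: {resolution_rate:.0%}")
print(f"Median hours to resolve: {median_hours:.1f}")
```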

Posted 2 weeks ago

Senior Data Analyst (Hybrid)-logo
EmpassionNew York, NY
About Empassion Empassion is a Management Services Organization (MSO) focused on improving the quality of care and costs on an often neglected “Advanced illness/ end of life” patient population, representing 4 percent of the Medicare population but 25 percent of its costs. The impact is driven deeper by families who are left with minimal options and decreased time with their loved ones. Empassion enables increased access to tech-enabled proactive care while delivering superior outcomes for patients, their communities, the healthcare system, families, and society. The Opportunity Join our high-impact Data & Analytics team to shape a modern, flexible analytics platform that powers Empassion’s mission. As a Senior Data Analyst, you’ll collaborate with analytics engineers and cross-functional partners—Growth, Product, Operations, and Finance—to turn complex data into actionable insights. Using tools like SQL, dbt, and Looker, you’ll build pipelines, models, and dashboards that decode patient care journeys and amplify our value to partners. This is a chance to influence both internal strategy and external impact from day one. What You’ll Do 🌟 Partner with teams across the business to pinpoint analytics needs and deliver solutions that solve real problems. 🔍 Dig into proprietary app data and third-party sources (e.g., medical claims) to map care journeys, assess provider performance, and fuel growth strategies. 👥 Support growth and partner strategy by analyzing medical claims to size opportunities, evaluate program impact, and surface insights that inform sales conversations and expansion priorities. 🚀 Enhance and scale data models with SQL and dbt, ensuring precision and adaptability for new partnerships. 📊 Craft intuitive Looker dashboards and Explores with LookML, empowering self-serve access to trusted metrics. 🤝 Team up with Product and Tech to evolve reporting as our platform grows, working in shared dev environments. 📝 Document processes and train users—technical and non-technical—to maximize tool adoption. ⏳ Balance your time across modeling (dbt), dashboarding (Looker), and ad hoc analysis. What You’ll Bring - 2–6 years in data analytics or analytics engineering, with a knack for turning data into insights and visuals that drive decisions. - SQL mastery—writing efficient, reliable queries on complex datasets. - Hands-on experience with dbt for modeling and Looker for dashboards/LookML. - Strong communication to bridge technical and non-technical worlds—think engineers, operators, and external partners. - A proactive mindset, thriving in a fast-paced setting with iterative problem-solving. - Curiosity about operational workflows and a drive to partner with non-technical teams, ensuring data and reporting align with how the business actually runs. You're not just a spec-taker, you're part of the solution. - Curiosity about healthcare workflows and a passion for patient impact. - A collaborative spirit, eager to build scalable, user-friendly tools. Bonus Points - Knowledge of healthcare data (claims, ADT feeds, eligibility files). - Experience with internally built apps alongside Product/Engineering teams. - Familiarity with Git/GitHub for version control. - Early-stage startup experience (seed/Series A), especially mission-driven ones. Why Empassion? Impact: End your day knowing your work shapes patient care and family experiences. Growth: Expand your skills with a team that prioritizes internal promotions. Team: Work with top-tier clinicians, operators, and technologists. 
Flexibility: Remote-first with a hybrid NYC option (2x/week in-person). We sync via Slack/Zoom, meet for biannual offsites, and travel as needed to build trust and momentum.
Our Culture
We’re a tight-knit, passionate crew holding ourselves to high standards—because our data directly affects lives. We’re remote-first, U.S.-distributed, and NYC-hybrid, prioritizing clear deliverables and weekly alignment. Expect a dynamic environment where you’ll flex across modeling, reporting, and analysis to meet evolving needs.
Ready to Make a Difference?
If you’re driven by data, healthcare, and impact, apply and let’s talk!
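As a hint of what the claims-based "care journey" analysis above can look like in practice, here is a tiny illustrative Python sketch that orders claims events per patient. Event names and fields are invented for the example and are not Empassion's data model.

```python
"""Illustrative sketch only: reconstruct a simple care journey per patient
by ordering made-up claims events chronologically."""

from collections import defaultdict

claims = [
    {"patient_id": "A", "date": "2024-01-10", "event": "hospital_admission"},
    {"patient_id": "A", "date": "2024-02-02", "event": "palliative_consult"},
    {"patient_id": "B", "date": "2024-01-20", "event": "er_visit"},
    {"patient_id": "A", "date": "2024-03-15", "event": "hospice_enrollment"},
]

# Group events by patient, then sort each patient's events by date
# (ISO date strings sort correctly as plain strings).
journeys: dict[str, list[tuple[str, str]]] = defaultdict(list)
for claim in claims:
    journeys[claim["patient_id"]].append((claim["date"], claim["event"]))

for patient, events in journeys.items():
    ordered = [event for _, event in sorted(events)]
    print(patient, " -> ".join(ordered))
```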

Posted 3 weeks ago

IMO HealthRosemont, IL
At IMO Health, Semantic Data Modelers are key members of our ontology-driven graph engineering team, helping to build and maintain a virtualized, intelligent, and scalable medical terminology platform. Your work will empower over 740,000 clinicians by enhancing how healthcare data is structured, delivered, and understood. We are seeking a highly experienced and strategic Senior Semantic Data Modeler to join our team, with a specialized focus on Knowledge Graphs. In this critical role, you will lead the design, development, and governance of complex semantic models that empower both human and machine understanding of our most vital clinical concepts and terminology relationships. You will serve as a key resource, bridging the gap between diverse raw data sources and strategic business needs by crafting a robust, consistent, and highly accessible knowledge layer. This position requires exceptional collaboration skills, working closely with our staff semantic engineers, clinicians, and content teams, a keen eye for defining intricate data structures, and a commitment to ensuring the highest data quality and accessibility for semantic enrichment and clinical interoperability initiatives. WHAT YOU'LL DO: Semantic Model Development: Drive the end-to-end design, development, and evolution of complex semantic data models, with a primary focus on ontologies, knowledge graphs, and property graphs. Strategically Translate Business Requirements: Transform intricate, cross-functional business needs into formal, scalable knowledge graph structures, ensuring tight alignment with the enterprise data strategy and long-term architectural vision. Define, Document, and Govern Semantic Assets: Establish best practices for, and create comprehensive documentation of, semantic models, including detailed entity definitions, relationship types, axioms, constraints, and data lineage, fostering clarity, consistency, and reusability across the organization. Cross-Functional Collaboration: Partner closely with staff semantic engineers , clinicians, content teams, and business leaders to deeply understand their domain knowledge and requirements, translate complex concepts into actionable models, and ensure that semantic solutions effectively meet organizational objectives. Implement Robust Data Quality & Consistency: Design and implement data quality frameworks, validation rules, and transformation logic within the semantic layer to ensure the accuracy, reliability, and consistency of the knowledge graph. Optimize and Scale Knowledge Graph Performance: Drive the optimization of knowledge graph structures, query performance, and usability for diverse data consumption scenarios, including advanced analytics, AI applications, and self-service initiatives. Innovate and Set Standards: Continuously research, evaluate, and recommend new technologies, methodologies, and best practices in semantic modeling, knowledge graph technologies, ontology engineering, and cloud-based analytics to drive continuous improvement. Mentor and Guide: Provide leadership and mentorship to junior data modelers and engineers, fostering a culture of knowledge sharing and excellence in semantic modeling practices. WHAT YOU'LL NEED: BA/BS in a STEM field with 7+ years of hands-on work experience with a significant portion dedicated to semantic modeling and knowledge graph development, including experience in a lead or senior capacity. Deep and demonstrated expertise in designing, building, and managing ontologies, knowledge graphs, and property graphs. 
Extensive experience with leading graph database platforms (e.g., Amazon Neptune) and advanced proficiency in graph query languages (e.g., SPARQL).
Strong working knowledge of OWL, RDFS, SHACL, and other semantic web standards.
Experience with enterprise data modeling tools (e.g., Erwin) and specialized ontology/graph modeling tools.
Strong understanding and hands-on experience with relational databases (SQL) and familiarity with NoSQL databases (e.g., PostgreSQL).
Proven ability to communicate complex technical concepts effectively to both technical and non-technical stakeholders, and to lead collaborative efforts across diverse teams.
Demonstrated ability to analyze complex data challenges, identify root causes, and architect strategic, scalable solutions within a semantic context.
Practical experience with AWS services related to data and ELT methodologies is often preferred.
Experience in an Agile/Scrum environment, iteratively developing and deploying data solutions.
Bonus: Understanding of healthcare ontologies and standards like SNOMED-CT, LOINC, RxNorm, and ICD-10.
Compensation at IMO Health is determined by job level, role requirements, and each candidate’s experience, skills, and location. The listed base pay represents the target for new hires, with individual compensation varying accordingly. These figures exclude potential bonuses, equity, or sales incentives, which may also be part of the total compensation package. Our recruiter will provide additional details during the hiring process. IMO Health also offers a comprehensive benefits package. To learn more, please visit IMO Health’s Careers Page.
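For readers unfamiliar with the semantic web stack named above (RDFS/OWL, SPARQL), here is a minimal illustrative sketch using the open-source rdflib package: it asserts a toy subclass relationship and queries it with SPARQL. The clinical concepts are invented placeholders, not IMO Health's terminology or graph.

```python
"""Illustrative sketch only: a toy RDFS class hierarchy queried with SPARQL."""

from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/clinical#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# A tiny class hierarchy: Pneumonia is modeled as a kind of RespiratoryDisease.
g.add((EX.RespiratoryDisease, RDF.type, RDFS.Class))
g.add((EX.Pneumonia, RDF.type, RDFS.Class))
g.add((EX.Pneumonia, RDFS.subClassOf, EX.RespiratoryDisease))
g.add((EX.Pneumonia, RDFS.label, Literal("Pneumonia")))

# SPARQL: find every class asserted as a subclass of RespiratoryDisease.
results = g.query(
    """
    SELECT ?concept ?label WHERE {
        ?concept rdfs:subClassOf ex:RespiratoryDisease .
        OPTIONAL { ?concept rdfs:label ?label }
    }
    """,
    initNs={"ex": EX, "rdfs": RDFS},
)
for concept, label in results:
    print(concept, label)
```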

Posted 3 weeks ago

HealthCareChicago, IL
Join Us! HealthCare.com has become one of America’s fastest-growing insurtech companies, revolutionizing how consumers shop for health insurance. Leveraging advanced technology and data science, the company has developed customized proprietary products to better fit consumer requirements, enhance customer satisfaction, and take some of the guesswork and inefficiencies out of buying insurance.
Job Overview
We are seeking a Performance Marketing Analyst to join our analytics team, focusing on delivering data-driven insights and measurement to optimize paid media campaigns and improve customer acquisition efficiency. In this role, you will partner closely with our marketing and product teams to measure performance, optimize campaigns, and improve customer acquisition funnel conversion at scale. The ideal candidate has a strong analytical foundation, hands-on experience with paid media and customer acquisition funnels, and a track record of analyzing two-sided marketplaces and turning data into actionable insights.
Key Responsibilities
Campaign Performance Analytics
- Analyze and monitor marketing performance across Google, Bing, and Facebook Ads, identifying opportunities to improve efficiency and effectiveness.
- Build and maintain dashboards and reports to track CAC, ROAS, CPA, CTR, CVR, and other core acquisition metrics.
- Support weekly/monthly reporting cadences and proactively surface performance trends and anomalies.
- Evaluate A/B and multivariate tests to assess impact and inform creative, bidding, and audience strategies.
Attribution & Funnel Measurement
- Support or build attribution models (first-touch, last-touch, multi-touch).
- Analyze conversion funnels to pinpoint drop-off points and user friction.
- Optimize marketing acquisition channels with funnel experiences.
- Help connect paid media spend to downstream actions.
Forecasting & Budget Planning
- Collaborate with marketing and finance to build spend forecasts, acquisition projections, and channel ROI models.
- Analyze marginal CAC and incremental spend impact to inform budget allocation.
Experimentation & A/B Testing
- Design and evaluate creative tests, landing page variants, bidding strategies, audience segments, and conversion funnel variants.
- Ensure statistical rigor in test setup and interpret results for incremental lift.
Reporting & Automation
- Build and maintain dashboards (e.g., Tableau, Looker, Power BI) to surface daily and weekly performance trends.
- Automate routine reporting workflows and collaborate with data engineers for scalable pipelines.
Stakeholder Communication
- Present insights to marketing, product, growth, and executive stakeholders.
- Translate data findings into clear recommendations and strategic guidance.
Required Skills & Experience
- 3+ years of experience in marketing analytics, growth analytics, or digital media analytics.
- Hands-on experience analyzing performance for Google Ads, Bing Ads, and Facebook/Meta Ads campaigns.
- Hands-on experience analyzing customer conversion funnels.
- Strong command of SQL and experience working with large-scale marketing or customer datasets.
- Proficiency in data visualization tools such as Tableau, Looker, Power BI, or Sigma.
- Solid understanding of digital marketing metrics (CAC, ROAS, CPA, CVR, LTV) and campaign tracking methods.
- Experience analyzing A/B experiments for statistical significance.
- Ability to clearly communicate technical findings to non-technical stakeholders.
- Strong problem-solving skills and a collaborative, solutions-oriented mindset.
Preferred Qualifications
- Experience with Google Analytics, Google Tag Manager, and UTM/campaign tracking.
- Familiarity with incrementality testing, attribution models, or multi-touch attribution platforms.
- Exposure to data warehousing environments such as Snowflake, BigQuery, or Redshift.
- Experience working in a subscription, eCommerce, or high-growth consumer environment.
- Basic familiarity with Python or R for data analysis is a plus.
Benefits
Opportunity to work from home
Excellent work environment
Medical, dental, and vision insurance
Up to 15 days of paid time off
11 company observed holidays
8 weeks of paid parental leave
401k plan with company match
Life insurance
Professional growth opportunity
Most importantly, an inclusive company culture established by an incredible team!
Get to Know Us!
https://www.healthcare.com/
linkedin.com/company/healthcare-com
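As one concrete example of the "statistical rigor in test setup" mentioned above, here is a short illustrative sketch of a two-proportion z-test on A/B conversion counts, plus the CAC and ROAS arithmetic this listing references. The figures are made up, and the statsmodels dependency is an assumption; any standard stats library could be used instead.

```python
"""Illustrative sketch only: A/B significance check and basic acquisition metrics."""

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical funnel counts for a landing-page test (not real data).
conversions = [130, 162]   # control, variant
visitors = [5000, 5100]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

# Core acquisition metrics from hypothetical spend/revenue figures.
spend, new_customers, revenue = 25_000.0, 400, 60_000.0
cac = spend / new_customers    # cost to acquire one customer
roas = revenue / spend         # revenue returned per ad dollar
print(f"CAC = ${cac:,.2f}, ROAS = {roas:.2f}x")
```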

Posted 30+ days ago

AWS Data Engineer (Senior)-logo
MactoresSeattle, WA
Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing End-to-End Data Solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with a digital transformation via assessments, migration, or modernization.
Mactores is seeking an AWS Data Engineer (Senior) to join our team. The ideal candidate will have extensive experience in PySpark and SQL, and have worked with data pipelines using Amazon EMR or Amazon Glue. The candidate must also have experience in data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, Presto, and orchestration experience using Airflow.
What will you do?
Develop and maintain data pipelines using Amazon EMR or Amazon Glue.
Create data models and support end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto.
Build and maintain the orchestration of data pipelines using Airflow.
Collaborate with other teams to understand their data needs and help design solutions.
Troubleshoot and optimize data pipelines and data models.
Write and maintain PySpark and SQL scripts to extract, transform, and load data.
Document and communicate technical solutions to both technical and non-technical audiences.
Stay up-to-date with new AWS data technologies and evaluate their impact on our existing systems.
What are we looking for?
Bachelor's degree in Computer Science, Engineering, or a related field.
3+ years of experience working with PySpark and SQL.
2+ years of experience building and maintaining data pipelines using Amazon EMR or Amazon Glue.
2+ years of experience with data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto.
1+ years of experience building and maintaining the orchestration of data pipelines using Airflow.
Strong problem-solving and troubleshooting skills.
Excellent communication and collaboration skills.
Ability to work independently and within a team environment.
It is preferred if you have:
AWS Data Analytics Specialty Certification
Experience with Agile development methodology
Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more details about the work culture on https://mactores.com/careers
The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
Pre-Employment Assessment: You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles. Note: Please answer as many questions as possible with this application to accelerate the hiring process.
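To illustrate the kind of PySpark transform-and-load work described in this listing, here is a minimal sketch of a daily rollup job. The S3 paths, column names, and partitioning choice are invented assumptions, not Mactores or client specifics; on EMR or Glue, the same logic would typically be submitted as a job or step script.

```python
"""Illustrative sketch only: a small PySpark daily rollup over made-up order data."""

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Read raw order events (path and schema are hypothetical).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Filter bad rows and roll up to a daily grain before writing curated
# output that Athena/Redshift-style engines could query.
daily = (
    orders
    .filter(F.col("order_total") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("order_total").alias("revenue"),
    )
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders_daily/"
)
```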

Posted 3 weeks ago

RoNew York, NY
Ro is a direct-to-patient healthcare company with a mission of helping patients achieve their health goals by delivering the easiest, most effective care possible. Ro is the only company to offer nationwide telehealth, labs, and pharmacy services. This is enabled by Ro's vertically integrated platform that helps patients achieve their goals through a convenient, end-to-end healthcare experience spanning from diagnosis, to delivery of medication, to ongoing care. Since 2017, Ro has helped millions of patients, including one in every county in the United States, and in 98% of primary care deserts. Ro has been recognized as a Fortune Best Workplace in New York and Health Care for four consecutive years (2021-2024). In 2023, Ro was also named Best Workplace for Parents for the third year in a row. In 2022, Ro was listed as a CNBC Disruptor 50. We are looking for a Senior Software Engineer to join our Data Infrastructure team. In this role, you will design, build, and maintain scalable, reliable, and secure data infrastructure powering analytics, AI applications, and data-driven products. You'll collaborate closely with data scientists, product engineers, and machine learning engineers to enhance data accessibility, governance, and efficiency, tackling complex technical challenges with pragmatism and long-term vision. What You'll Do: Design, develop, and maintain data infrastructure capabilities that enable teams to effectively and securely produce, consume, and utilize data from internal systems and third-party platforms. Example projects include: Building a platform for deploying AI-powered and LLM-backed applications that leverage production-grade, governed data. Scaling analytics pipelines and data models to support a rapidly growing and increasingly complex business. Enabling data discovery, lineage, and quality monitoring to support trust and compliance in AI training and inference pipelines. Implementing change data capture (CDC) for Postgres and Kafka to support real-time data availability. Adopting modern table formats (e.g., Apache Iceberg) to enable incremental processing and time travel for ML features and analytical workflows. Operationalizing a security classification and governance framework to support responsible AI and data privacy. Generating synthetic datasets that replicate production schema and distributions for safe development, testing, and AI model evaluation. What You'll Bring to the Team Strong programming skills in Python and/or Go, with a track record of delivering reliable, well-tested systems. Deep knowledge of data engineering fundamentals, including warehousing, modeling, and transformation (e.g., dbt, SQL). Experience designing and building infrastructure that supports both batch and streaming data processing at scale. Experience building infrastructure that supports AI and ML workflows, such as feature generation, model inference, or vectorized search. Familiarity with data lakehouse formats (e.g., Apache Iceberg, Delta Lake) and the benefits they provide for AI-scale workloads. A strong understanding of data governance, security, and compliance principles, especially in regulated or privacy-sensitive domains. Proficiency with cloud-native data infrastructure in AWS (e.g., EKS, S3, IAM) and infrastructure-as-code (e.g., Terraform, Pulumi). Proven ability to collaborate with cross-functional partners in a fast-paced, security-conscious environment. 
Bonus: experience enabling LLM use cases through retrieval-augmented generation (RAG), vector search, or synthetic data generation. We've Got You Covered: Full medical, dental, and vision insurance + OneMedical membership Healthcare and Dependent Care FSA 401(k) with company match Flexible PTO Wellbeing + Learning & Growth reimbursements Paid parental leave + Fertility benefits Pet insurance Student loan refinancing Virtual resources for mindfulness, counseling, and fitness We welcome qualified candidates of all races, creeds, genders, and sexuality to apply. The target base salary for this position ranges from $175,100 to $211,500, in addition to a competitive equity and benefits package (as applicable). When determining compensation, we analyze and carefully consider several factors, including location, job-related knowledge, skills and experience. These considerations may cause your compensation to vary. Ro recognizes the power of in-person collaboration, while supporting the flexibility to work anywhere in the United States. For our Ro’ers in the tri-state (NY) area, you will join us at HQ on Tuesdays and Thursdays. For those outside of the tri-state area, you will be able to join in-person collaborations throughout the year (i.e., during team on-sites). At Ro, we believe that our diverse perspectives are our biggest strengths — and that embracing them will create real change in healthcare. As an equal opportunity employer, we provide equal opportunity in all aspects of employment, including recruiting, hiring, compensation, training and promotion, termination, and any other terms and conditions of employment without regard to race, ethnicity, color, religion, sex, sexual orientation, gender identity, gender expression, familial status, age, disability and/or any other legally protected classification protected by federal, state, or local law. See our California Privacy Policy here .
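For a taste of the change-data-capture work mentioned above (Postgres into Kafka), here is a minimal illustrative consumer for Debezium-style change events, written with the kafka-python package. The topic name, broker address, and payload fields are assumptions for the sketch, not Ro's actual configuration.

```python
"""Illustrative sketch only: consume Debezium-style CDC events from Kafka."""

import json

from kafka import KafkaConsumer  # kafka-python package (assumed available)

consumer = KafkaConsumer(
    "pg.public.orders",                       # hypothetical CDC topic
    bootstrap_servers=["localhost:9092"],     # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

for message in consumer:
    event = message.value
    # Debezium-style envelope: "op" is "c" (insert), "u" (update), or "d" (delete),
    # with row images under "before"/"after".
    op = event.get("op")
    row = event.get("after") or event.get("before") or {}
    print(f"{op}: order_id={row.get('id')} status={row.get('status')}")
```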

Posted 3 weeks ago

Senior Data Analyst-logo
Strategic Data SystemsDahlgren, VA
Data Analyst
Dahlgren Naval Surface Warfare Center, Dahlgren, VA
Salary negotiable (Dependent on experience level) - Full Time with Benefits
Flexible Start-Date – Contingent on contract award
The Data Analyst is part of a business intelligence and data warehouse team tasked with providing data analysis, data design, systems development, training, and customer support. Responsibilities include:
• Systems development activities include data table/structure design, data acquisition, system specifications, data validation and documentation.
• Data Analyst support will also include development of data interface specifications, on-line query and report specifications, database load specifications, and data validation specifications.
• Customer support, including interfacing with customers, troubleshooting, product review and analysis, and problem resolution.
To qualify, you will need:
• An Active DOD Secret Security Clearance
• High School Diploma and six (6) or more years of experience
• Experience with data warehouse development and design, data knowledge acquisition, legacy conversion specifications and design of data structures/load specifications, and knowledge of Working Capital Fund and Enterprise Resource Planning (ERP), financial, and human resources systems methods and strategies for data warehousing.
• The ability to apply analytical skills relating to database management and development
• Knowledge of relational database management systems and data warehousing
• Experience using Cognos Tool Suite to create reports and dashboards
• Skill in writing and analyzing structured query language (SQL) queries
• Experience developing requirements definitions and application designs in the form of logical data models, data interface specifications, on-line query and report specifications, structured query language (SQL), database performance loading specifications, and data validation specifications.
• Ability to communicate effectively orally and in writing for required customer support and requirements gathering and validation
• Knowledge of analytical tools and evaluation techniques
• Knowledge of government accounting (highly desired: DOD Accounting)
• Ability to communicate effectively with all levels of employees and outside contacts
• Strong interpersonal skills and good judgment with the ability to work alone or as part of a team
Strategic Data Systems provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
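As a small illustration of the database load and data validation specifications work listed above, here is a sketch of a load-reconciliation check between a staging table and its target. It runs against in-memory SQLite purely for the example; the table names are invented, and a real version would point at the warehouse instead.

```python
"""Illustrative sketch only: confirm that every staged row landed in the target table."""

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_transactions (txn_id INTEGER, amount REAL);
    CREATE TABLE fct_transactions (txn_id INTEGER, amount REAL);
    INSERT INTO stg_transactions VALUES (1, 10.0), (2, 25.5), (3, 7.25);
    INSERT INTO fct_transactions VALUES (1, 10.0), (2, 25.5);
""")

# Row-count reconciliation between staging and target.
staged = conn.execute("SELECT COUNT(*) FROM stg_transactions").fetchone()[0]
loaded = conn.execute("SELECT COUNT(*) FROM fct_transactions").fetchone()[0]

# Identify any staged rows missing from the target.
missing = conn.execute("""
    SELECT s.txn_id FROM stg_transactions s
    LEFT JOIN fct_transactions f ON f.txn_id = s.txn_id
    WHERE f.txn_id IS NULL
""").fetchall()

print(f"staged={staged}, loaded={loaded}, missing_ids={[m[0] for m in missing]}")
```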

Posted 3 weeks ago

Cloud Data Engineer-logo
Datalab USAGermantown, MD
DataLab USA is an analytics- and technology-driven database marketing consultancy. We combine sophisticated technology, cutting-edge analytics and an intrinsic understanding of marketing to build large-scale addressable marketing programs for Fortune 500 companies. Our clients operate in multiple verticals: Financial Services, Insurance, Telecom, and Travel & Leisure. DataLab’s Enterprise Product Management Team is at the core of deploying value creation for our clients. Our technology team’s problem solving and out-of-the-box thinking translate into positive ROI and success for our clients.
The primary purpose of this position is to develop data processing solutions and associated technical services designed to support client business objectives. Candidates must be technically proficient and possess good interpersonal, troubleshooting, and documentation skills.
**No sponsorship available for this position**
**Candidate must be local to Germantown, MD**
Key Responsibilities
Leverage Python, SQL and cloud-based technology to build scalable and secure data processing software for continuous integration and delivery
Work on a wide range of interesting technical and business problems to support big data processing
Support the build of new solutions to a wide array of complex system design challenges
Work with team leads to prototype, design, and implement internal process improvements: automating manual processes, optimizing data delivery, building infrastructure for greater scalability
Support infrastructure and applications to process large amounts of data in a distributed computing environment
Write automated test frameworks for use across projects
Develop and maintain software using already established best practices for creating readable and maintainable code
Debug and resolve software defects
Act as the first-level support for existing production applications through the Software Development Lifecycle (SDLC)
Prepare and maintain solution documentation
Required Skills and Qualifications
Bachelor’s in Computer Science
Experience developing applications in Python
Familiarity with SQL and working with relational databases
Experience with cloud platforms (Snowflake, AWS, Azure, or GCP) and cloud-native data services
Strong problem-solving skills and attention to detail
Experience using third-party Python frameworks
Good understanding of Python data structures and object-oriented programming concepts
Outstanding coding skills, knowledge of patterns and best practices in an object-oriented style
Teamwork, strong interpersonal skills
Preferred Qualifications:
1-3 years of experience building Python or Cloud applications
Familiarity with API development frameworks (e.g., Flask, REST).
Familiarity developing in a distributed environment
Experience with data pipeline frameworks (e.g., Airflow, dbt, Spark, or similar)
Experience with processing large amounts of data with Big Data frameworks
Familiarity with CI/CD tools and DevOps practices
Familiarity with secure coding guidelines and standards
The base pay range provided serves as a general guideline. The final annual salary offered to the selected candidate will vary based on factors such as qualifications, skill level, competencies, and the scope and responsibilities of the role.
We are proud to offer a comprehensive benefits package designed to support the well-being and financial security of our employees. Our benefits include:
Health, Dental, and Vision Plans: Comprehensive coverage to meet your healthcare needs.
Employee Assistance Program (EAP): Resources and support for personal and professional challenges.
401(k) Retirement Savings Plan: Includes option for Traditional or Roth IRA to help you plan for your future.
Paid Time Off: Enjoy paid vacation and sick leave to maintain work-life balance.
Company Holidays: Nine paid company holidays throughout the year.
DataLab USA™ is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity or national origin. All offers of employment are contingent on passing a background check and drug test.
Privacy Policy - DataLab USA™ | Targeting Better Results
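Since the listing calls out API development frameworks such as Flask, here is a minimal, illustrative Flask endpoint of the sort a data-processing service might expose. The segment data, field names, and route are invented for the example and do not represent DataLab USA's systems.

```python
"""Illustrative sketch only: a tiny Flask endpoint serving made-up segment data."""

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for a processed marketing dataset (names invented).
SEGMENTS = {
    "high_value": {"households": 120_000, "channel": "direct_mail"},
    "lapsed": {"households": 45_000, "channel": "email"},
}


@app.get("/segments")
def list_segments():
    """Return every audience segment, optionally filtered by channel."""
    channel = request.args.get("channel")
    matches = {
        name: seg for name, seg in SEGMENTS.items()
        if channel is None or seg["channel"] == channel
    }
    return jsonify(matches)


if __name__ == "__main__":
    app.run(debug=True)
```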

Posted 3 weeks ago
