
Auto-apply to these data science jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

California Life Company - South San Francisco, CA
Who We Are:
Calico (Calico Life Sciences LLC) is an Alphabet-founded research and development company whose mission is to harness advanced technologies and model systems to increase our understanding of the biology that controls human aging. Calico will use that knowledge to devise interventions that enable people to lead longer and healthier lives. Calico's highly innovative technology labs, its commitment to curiosity-driven discovery science, and, with academic and industry partners, its vibrant drug-development pipeline together create an inspiring and exciting place to catalyze and enable medical breakthroughs.

Position Description:
Calico is seeking a Data Scientist/Senior Data Scientist to join the statistical genetics team within the Computational Sciences group. In this position, you will develop and apply cutting-edge computational methods to analyze unique biobank-scale datasets (e.g. UK Biobank) to identify potential drug targets for age-related disease. Two major areas of focus will be the analysis of longitudinal datasets to identify factors modulating the trajectory of age-related decline and the analysis of high-dimensional phenotypes. The successful candidate will join a vibrant research community, work closely with internal and external scientific collaborators, and will be expected to contribute to the design of target discovery or validation efforts.

Position Responsibilities:
- Develop and apply computational methods suitable for biobank-scale complex or high-dimensional phenotypic datasets from both public and proprietary data sources
- Conceive, design, and execute studies to interrogate the genetic basis of age-related complex traits and of aging trajectories in large human cohorts
- Integrate multiple data sources (e.g. clinical data, genetics, 'omics) to develop therapeutic hypotheses for age-related disease
- Contribute to software and/or workflows for the analysis of cohort data across multiple research projects and development programs
- Collaborate with and communicate findings effectively to researchers from a broad range of scientific backgrounds, both internally and externally

Position Requirements:
- Ph.D. in genetics, statistics, statistical genetics, computational biology, or equivalent
- Track record of developing and applying new computational and statistical methods tailored to analyzing novel datasets
- Experience with the statistical genetic toolkit for complex traits (e.g. GWAS, gene burden tests, statistical fine-mapping, LD score regression, eQTL/pQTL mapping, colocalization, polygenic risk scores, Mendelian randomization), including methods for ancestrally diverse populations
- Experience analyzing large, high-dimensional clinical and/or molecular datasets (for example, imaging, genomics and other 'omics, longitudinal data)
- Familiarity with large human cohort studies (e.g. UK Biobank, FinnGen, All of Us)
- Strong coding skills in Python and/or R, including experience developing software and/or workflows that can be readily used by others
- Strong interpersonal, written, and verbal communication skills, including collaborating with stakeholders from different scientific disciplines
- Must be able to work onsite at least 4 days a week

The estimated base salary range for this role is $120,000 - $185,000. Actual pay will be based on a number of factors including experience and qualifications. This position is also eligible for two annual cash bonuses.
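At its simplest, the GWAS toolkit this posting names reduces to a per-variant association test: regress the phenotype on genotype dosage and report the effect estimate and its standard error. A toy sketch using only numpy (the simulated data and true effect size of 0.3 are invented for illustration; production pipelines add covariates, relatedness corrections, and QC):

```python
import numpy as np

def single_variant_assoc(genotypes, phenotype):
    """OLS regression of phenotype on genotype dosage (0/1/2).

    Returns (beta_hat, standard_error), the core numbers behind one
    row of GWAS summary statistics. Illustration only.
    """
    g = genotypes - genotypes.mean()
    y = phenotype - phenotype.mean()
    beta = (g @ y) / (g @ g)                          # OLS slope
    resid = y - beta * g
    se = np.sqrt((resid @ resid) / (len(y) - 2) / (g @ g))
    return beta, se

rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=5000).astype(float)       # simulated dosages
y = 0.3 * g + rng.normal(size=5000)                   # true effect = 0.3
beta, se = single_variant_assoc(g, y)
print(f"beta={beta:.3f} se={se:.3f}")                 # estimate lands near 0.3
```

Real analyses at biobank scale would use tools like regenie or PLINK rather than raw numpy, but the statistic being computed is the same.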

Posted 1 week ago

Pacific Life - Newport Beach, CA
Job Description:
Providing for loved ones, planning rewarding retirements, saving enough for whatever lies ahead - our policyholders count on us to be there when it matters most. It's a big ask, but it's one that we have the power to deliver when we work together. We collaborate and innovate - pushing one another to transform not just Pacific Life, but the entire industry for the better. Why? Because it's the right thing to do. Pacific Life is more than a job, it's a career with purpose. It's a career where you have the support, balance, and resources to make a positive impact on the future - including your own.

Unlock the Power of Data at Pacific Life. We're seeking a talented Lead Data Engineer to join our Pacific Life Investments Data Team onsite in Newport Beach, CA. We are looking for self-starters to help shape the future of data engineering and drive data-driven success. As a Lead Data Engineer, you'll move Pacific Life, and your career, forward by accelerating our data initiatives and bringing modern technical solutions forward. You will fill a new role that sits on the Investments data team in the technology organization. Your colleagues will include scrum masters, data analysts, and fellow Data, AI, Governance, and QA professionals. Join our highly collaborative, innovative team.

How you'll help move us forward:
- Partner with data architects, analysts, engineers, and stakeholders to understand data requirements and deliver solutions.
- Help build scalable products with robust security, quality, and governance protocols.
- Create low-level design artifacts, including mapping specifications.
- Build scalable and reliable data pipelines to support data ingestion (batch and/or streaming) and transformation from multiple data sources using SQL, AWS, Snowflake, and data integration technologies.
- Create unit/integration tests and implement automated build and deployment.
- Participate in code reviews to ensure standards and best practices.
- Deploy, monitor, and maintain production systems.
- Use the Agile Framework to organize, manage, and execute work.
- Demonstrate adaptability, initiative, and attention to detail through deliverables and ways of working.

The experience you bring:
- Bachelor's degree in Computer Science, Data Science, or Statistics
- 7+ years of experience in analysis, design, development, and delivery of data
- 7+ years of experience and proficiency in SQL, ETL, ELT, leading cloud data warehouse technologies, data transformation, and data management tools
- Understanding of data engineering best practices and data integration patterns
- 2+ years of experience with DevOps and CI/CD
- 1+ years of experience (not just POC) using Git and Python
- Agile Scrum work experience
- Effective communication and facilitation, both verbal and written
- Team-oriented: collaborating effectively with team and stakeholders
- Analytical skills: strong problem-solving skills with the ability to break down complex data solutions

What makes you stand out:
- Investments or fintech domain knowledge preferred
- Strong data analysis and/or data mining experience
- Experience with one or more integration tools (Matillion, Informatica, SQL SSIS, DBT)
- Experience with Snowflake and DBT
- Works independently with minimal guidance

#LI-DW1

You can be who you are. People come first here. We're committed to an inclusive workforce. Learn more about how we create a welcoming work environment at www.pacificlife.com. What's life like at Pacific Life? Visit Instagram.com/lifeatpacificlife.

Benefits start Day 1. Your wellbeing is important. We're committed to providing flexible benefits that you can tailor to meet your needs. Whether you are focusing on your physical, financial, emotional, or social wellbeing, we've got you covered.
- Prioritization of your health and well-being including Medical, Dental, Vision, and a Wellbeing Reimbursement Account that can be used on yourself or your eligible dependents
- Generous paid time off options including Paid Time Off, Holiday Schedules, and Financial Planning Time Off
- Paid Parental Leave as well as an Adoption Assistance Program
- Competitive 401k savings plan with company match and an additional contribution regardless of participation

Base Pay Range: The base pay range noted represents the company's good faith minimum and maximum range for this role at the time of posting. The actual compensation offered to a candidate will be dependent upon several factors, including but not limited to experience, qualifications, and geographic location. Also, most employees are eligible for additional incentive pay. $100,530.00 is not this role's range; this role's range is $134,280.00 - $164,120.00.

EEO Statement: Pacific Life Insurance Company is an Equal Opportunity/Affirmative Action Employer, M/F/D/V. If you are a qualified individual with a disability or a disabled veteran, you have the right to request an accommodation if you are unable or limited in your ability to use or access our career center as a result of your disability.
To request an accommodation, contact a Human Resources Representative at Pacific Life Insurance Company.
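The "create unit/integration tests" responsibility in the posting above is easy to show concretely. A minimal sketch of a unit test for a pipeline transformation step, using only plain Python (the function, field names, and values are invented for illustration):

```python
def normalize_policy_row(row):
    """Trim whitespace, upper-case the id, and coerce the premium to a
    rounded float -- a tiny example of a pipeline transformation step."""
    return {
        "policy_id": row["policy_id"].strip().upper(),
        "premium": round(float(row["premium"]), 2),
    }

# The kind of unit test a CI/CD build would run on every commit.
def test_normalize_policy_row():
    out = normalize_policy_row({"policy_id": " pl-001 ", "premium": "1200.456"})
    assert out == {"policy_id": "PL-001", "premium": 1200.46}

test_normalize_policy_row()
print("ok")
```

In a real Snowflake/DBT stack the equivalent checks would typically live as dbt tests or warehouse-side assertions, but the idea is the same: every transformation gets a small, automated contract.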

Posted 30+ days ago

Guidehouse - San Antonio, TX
Job Family: Data Science Consulting
Travel Required: Up to 25%
Clearance Required: Ability to Obtain Public Trust

What You Will Do:
We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics.

Client Leadership & Delivery
- Collaborate with FCA clients to understand data architecture and reporting needs.
- Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau.
- Ensure delivery excellence and measurable outcomes across data migration and visualization efforts.

Solution Development & Innovation
- Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python.
- Develop and optimize Tableau dashboards aligned with federal reporting standards.
- Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting.

Practice & Team Leadership
- Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions.
- Support documentation, testing, and deployment of data products.
- Mentor junior developers and contribute to reusable frameworks and accelerators.
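The extract-transform-load pattern at the heart of this role can be sketched in a few lines. A toy illustration using only the Python standard library (the table, agencies, and amounts are invented; an actual Guidehouse delivery would target Databricks/Spark and Tableau rather than in-memory SQLite):

```python
import sqlite3

# Extract: raw rows as they might arrive from a source system.
raw_rows = [
    ("2024-01-03", "DOT", "  1200.50 "),
    ("2024-01-03", "GSA", "980"),
    ("2024-01-04", "DOT", "1100.25"),
]

# Transform: normalize types and trim stray whitespace.
clean_rows = [(d, agency, float(amount.strip())) for d, agency, amount in raw_rows]

# Load: write into a reporting table a dashboard could query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (day TEXT, agency TEXT, amount REAL)")
conn.executemany("INSERT INTO spend VALUES (?, ?, ?)", clean_rows)

# The kind of aggregate a Tableau dashboard would sit on top of.
totals = dict(conn.execute(
    "SELECT agency, SUM(amount) FROM spend GROUP BY agency"))
print(totals["DOT"])  # 2300.75
```

The same three stages scale up directly: Spark replaces the list comprehension, a lakehouse table replaces SQLite, and the aggregate feeds the visualization layer.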
What You Will Need:
- US Citizenship is required
- Bachelor's degree is required
- Minimum TWO (2) years of experience in data engineering and dashboard development
- Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure)
- Strong proficiency in SQL, Python, and Spark
- Experience building ETL pipelines and integrating data sources into reporting platforms
- Familiarity with data governance, metadata, and compliance frameworks
- Excellent communication, facilitation, and stakeholder engagement skills

What Would Be Nice To Have:
- AI/LLM certifications
- Experience working with FCA clients such as DOT, GSA, USDA, or similar
- Familiarity with federal contracting and procurement processes

What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include:
- Medical, Rx, Dental & Vision Insurance
- Personal and Family Sick Time & Company Paid Holidays
- Position may be eligible for a discretionary variable incentive bonus
- Parental Leave and Adoption Assistance
- 401(k) Retirement Plan
- Basic Life & Supplemental Life
- Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
- Short-Term & Long-Term Disability
- Student Loan PayDown
- Tuition Reimbursement, Personal Development & Learning Opportunities
- Skills Development & Certifications
- Employee Referral Program
- Corporate Sponsored Events & Community Outreach
- Emergency Back-Up Childcare Program
- Mobility Stipend

About Guidehouse
Guidehouse is an Equal Opportunity Employer - Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.

All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.

Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.

Posted 1 week ago

Epic-Bio - South San Francisco, CA
Who Are We?
Our company is based on the science of our founder, Stanley Qi, one of the original CRISPR co-inventors, who then furthered the technology so that DNA does not need to be cut to accomplish gene regulation. Instead, we regulate the epigenome to suppress and activate multiple genes simultaneously. We are further evolving the platform and leveraging its strengths to address unmet medical needs. We are looking for exceptional team members who want an active role in building a rapidly growing biotech.

Epicrispr Biotechnologies is seeking a highly motivated Associate Data Scientist/Data Scientist I. This position offers an exciting opportunity to contribute to the development of innovative epigenome engineering tools for therapeutic applications. As a key member of the BDS team, the Data Scientist will collaborate closely with experimental biologists to support experimental design and will take the lead in analyzing and interpreting high-throughput biological data from a range of cutting-edge screens and assays. This role requires a blend of ambition, technical expertise, and independence, with a strong ability to manage multiple projects simultaneously. The ideal candidate will be an excellent communicator with a proven track record of developing bioinformatics pipelines and solving complex problems in novel research areas.

Key Responsibilities:
- Lead collaborative efforts with experimental biologists and bioengineers on experimental design and data analysis approaches to drive strategic therapeutic and engineering goals.
- Analyze complex high-throughput datasets.
- Independently design, implement, and optimize bespoke analysis pipelines tailored for in-house screening efforts.
- Communicate analysis results effectively across teams, bridging the gap between experimental scientists, data scientists, and non-technical stakeholders.
- Contribute to and support the continuous improvement of bioinformatics tools, workflows, and methods.
Required Qualifications:
- Master's degree required (Ph.D. preferred) in Bioinformatics, Computational Biology, Biophysics, Genetics/Genomics, Bioengineering, Computer Science, Statistics, or a related quantitative field, with 0-2 years of experience in a relevant industry or academic environment; or an MS in one of the above fields with 2-4 years of industry/academic experience; or a BS in a relevant field with 5+ years of industry/academic experience.
- Proficiency in programming languages such as Python or R (preferably both), with a focus on exploratory data analysis and pipeline development.
- Demonstrated ability to effectively communicate complex scientific concepts and results to both technical and non-technical stakeholders.
- Strong experience with experimental design, working cross-functionally with wet-lab scientists, and contributing to multidisciplinary research efforts.

Preferred Qualifications:
- Expertise in gene regulation, epigenetics, CRISPR biology, and/or gene therapy.
- Extensive experience analyzing high-throughput sequencing data, including CRISPR screens, RNA-seq, ATAC-seq, ChIP-seq, WGBS, scRNA-seq, and similar datasets.
- Experience working with cloud computing platforms such as AWS for large-scale data analysis.
- Familiarity with GitHub for version control and collaborative code development.
- Theoretical and practical knowledge of machine learning and deep learning methodologies for biological data analysis.
- Familiarity with lentiviral vector design, packaging, transduction, and the creation of stable cell lines.

Compensation:
The salary range for this position is $90,000 to $120,000 USD annually. This salary range is an estimate, and the actual salary may vary based on various factors, including, without limitation, individual education, experience, tenure, skills, and abilities, as well as internal equity and alignment with market data, including potential adjustments for geographic location.
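A small flavor of the high-throughput sequencing analysis this role describes: before comparing read counts across samples, pipelines typically scale each sample by its library size (counts-per-million). A toy sketch using only numpy (the gene labels and count values are invented for illustration):

```python
import numpy as np

# Toy read-count matrix: rows = genes, columns = sequencing samples.
counts = np.array([
    [100, 200],   # geneA
    [300, 200],   # geneB
    [600, 600],   # geneC
], dtype=float)

# Counts-per-million: divide each column by its total reads, then
# scale, so expression is comparable across samples of different depth.
library_sizes = counts.sum(axis=0)
cpm = counts / library_sizes * 1_000_000

print(cpm[0])  # geneA across both samples
```

Real RNA-seq or CRISPR-screen workflows layer log transforms, variance modeling, and per-gene statistics on top, but library-size normalization is the common first step.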
Epicrispr Biotechnologies is an early-stage biotechnology company developing a novel technology platform that can provide safe and persistent control of targeted gene regulation. Our proprietary platform represents an entirely new class of therapeutics that can be leveraged to treat severe disease across numerous therapeutic areas, including complex diseases impacted by multiple genes.

Epicrispr Biotechnologies provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

Posted 30+ days ago

Morgan Stanley - Purchase, NY
Morgan Stanley is a leading global financial services firm providing a wide range of investment banking, securities, investment management, and wealth management services. The Firm's employees serve clients worldwide, including corporations, governments, and individuals, from more than 1,200 offices in 43 countries.

In this role, the AVP (Assistant Vice President) will support the U.S. Banks Data Quality Program, driving compliance with the Firm's Data Quality Policy and procedures. The AVP role is in the U.S. Banks First Line Business Risk team, which is responsible for managing risks associated with data quality. This role requires regular interaction with other divisions across the firm: technology, Bank business partners, Bank data stakeholders, Compliance, Second Line Risk, and Internal Audit.

Primary Responsibilities Include:
- Executing Global Data Quality Policy requirements to ensure compliance of prioritized data for the US Banks.
- Collaborate with the Firm Data Office, Divisional Data Offices, and Technology partners to perform data flow analysis, assess data quality controls, identify data owners, catalog metadata in Collibra, and manage annual re-attestation.
- Create activities for technology delivery to develop dashboards, automate stakeholder communications, and provide data quality management solutions.
- Deliver educational sessions and empower stakeholders to perform their roles and responsibilities with available tools.
- Identify data-related issues related to processes, technical issues, or controls impacting data consumers.
- Monitor and report on data-related issues; partner with the issue-owning division to ensure accuracy of issues reported and timely remediation.
- Monitor data quality controls and report to stakeholders for timely review and resolution.
- Develop metrics and dashboards for periodic management committee reporting.
- Document and maintain procedures and desktop procedures.
Qualifications:
- Advanced proficiency in Microsoft Excel, PowerPoint, Outlook, Word, Visio, and other MS Office tools.
- 5+ years of experience in financial services.
- Bachelor's degree required.
- Experience working in a fast-paced, Agile environment.
- Strong analytical skills with a focus on remediating data issues.
- Experience with data management and governance, data quality, data analysis, and data lineage processes is a distinct plus.
- Ability to collaborate with cross-functional stakeholders.
- Strong attention to detail and the ability to handle and prioritize multiple tasks and projects concurrently.
- Highly motivated self-starter with a sense of accountability and ownership, willingness to learn, and ability to work independently.
- Excellent written and verbal communication skills.
- Experience performing business analysis, writing business requirement documents, and creating mock dashboards.

WHAT YOU CAN EXPECT FROM MORGAN STANLEY:
We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren't just beliefs; they guide the decisions we make every day to do what's best for our clients, communities, and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you'll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There's also ample opportunity to move about the business for those who show passion and grit in their work.
To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser.

Expected base pay rates for the role will be between $85,000 and $140,000 per year at the commencement of employment. However, base pay if hired will be determined on an individualized basis and is only part of the total compensation package, which, depending on the position, may also include commission earnings, incentive compensation, discretionary bonuses, other short and long-term incentive packages, and other Morgan Stanley sponsored benefit programs.

Morgan Stanley's goal is to build and maintain a workforce that is diverse in experience and background but uniform in reflecting our standards of integrity and excellence. Consequently, our recruiting efforts reflect our desire to attract and retain the best and brightest from all talent pools. We want to be the first choice for prospective employees. It is the policy of the Firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, sex stereotype, gender, gender identity or expression, transgender, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy, veteran or military service status, genetic information, or any other characteristic protected by law. Morgan Stanley is an equal opportunity employer committed to diversifying its workforce (M/F/Disability/Vet).
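Data-quality controls like those this AVP role monitors often reduce to simple, repeatable rule checks over records. A toy sketch in plain Python (the rule names, fields, and sample rows are invented; the Firm's actual controls live in governance tooling such as Collibra):

```python
# Each rule returns the list of row indices that fail it.
def check_not_null(rows, field):
    """Flag rows where the field is missing or empty."""
    return [i for i, r in enumerate(rows) if r.get(field) in (None, "")]

def check_positive(rows, field):
    """Flag rows where the field is not a positive number."""
    return [i for i, r in enumerate(rows)
            if not (isinstance(r.get(field), (int, float)) and r[field] > 0)]

rows = [
    {"account_id": "A1", "balance": 250.0},
    {"account_id": "",   "balance": 90.0},    # fails not-null
    {"account_id": "A3", "balance": -5.0},    # fails positivity
]

failures = {
    "account_id_not_null": check_not_null(rows, "account_id"),
    "balance_positive": check_positive(rows, "balance"),
}
print(failures)  # {'account_id_not_null': [1], 'balance_positive': [2]}
```

Monitoring then becomes a matter of running the rule set on a schedule and routing non-empty failure lists to the owning division for remediation.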

Posted 30+ days ago

Pacific Life - Newport Beach, CA
Job Description:
We're actively seeking a talented Data Engineer to join our Pacific Life team in Newport Beach, CA. We are looking for self-starters to help shape the future of data engineering and drive data-driven success. As a Data Engineer, you'll move Pacific Life, and your career, forward by leading the design and delivery of data engineering products that enable our Finance business teams to be more data-driven. These data solutions will capture, manage, store, and utilize structured and semi-structured data from internal and external sources including Finance, Sales, Underwriting, Policy Admin, Claims, and other core business areas.

How you'll help move us forward:
- Partner with data architects, analysts, engineers, and business stakeholders to understand data requirements and deliver solutions in an effective manner.
- Build scalable and reliable solutions with robust security, quality, performance, and governance protocols.
- Deliver solutions aligned with modern target-state architecture and cloud-based technologies.
- Create and maintain design artifacts that support data engineering solutions.
- Lead creation and maintenance of automated and scalable test, build, and deploy workflows aligned with modern CI/CD practices.
- Promote a culture of continuous improvement and agile methodologies.
- Ensure data engineering standards and best practices are followed.
- Demonstrate adaptability, initiative, and attention to detail through deliverables and ways of working.

The experience you bring:
- Bachelor's degree in computer science, information systems, mathematics, analytics, or a related field
- 4+ years of experience in analysis, design, development, and delivery of data
- 4+ years of experience and proficiency in SQL, ETL, ELT, leading cloud data warehouse technologies, and data management tools
- Understanding of data engineering best practices and data integration patterns
- 2+ years of experience with DevOps and CI/CD
- 1+ years of experience (not just POC) using Git and Python
- Experience in agile methodologies
- Effective communication and facilitation, both verbal and written
- Team-oriented: collaborating effectively with team and stakeholders
- Analytical skills: strong problem-solving skills with the ability to break down complex data solutions

What makes you stand out:
- 1+ years of experience with Snowflake, Data Build Tool (DBT), and Matillion
- Experience with automation, optimization, and innovation in data management and batch cycle environments
- Understanding of data catalogs, glossaries, data quality, and effective data governance
- Financial services domain knowledge
- Data-driven individual with the ability to set up effective processes in your sphere of ownership
- Strong communication with the ability to translate business requirements into technical specifications
- Experience delivering Finance domain data engineering solutions
- Experience with the Control-M orchestration tool

You can be who you are. People come first here. We're committed to an inclusive workforce. Learn more about how we create a welcoming work environment at www.pacificlife.com. What's life like at Pacific Life? Visit Instagram.com/lifeatpacificlife.

Benefits start Day 1. Your wellbeing is important. We're committed to providing flexible benefits that you can tailor to meet your needs. Whether you are focusing on your physical, financial, emotional, or social wellbeing, we've got you covered.
- Prioritization of your health and well-being including Medical, Dental, Vision, and a Wellbeing Reimbursement Account that can be used on yourself or your eligible dependents
- Generous paid time off options including Paid Time Off, Holiday Schedules, and Financial Planning Time Off
- Paid Parental Leave as well as an Adoption Assistance Program
- Competitive 401k savings plan with company match and an additional contribution regardless of participation

Base Pay Range: The base pay range noted represents the company's good faith minimum and maximum range for this role at the time of posting. The actual compensation offered to a candidate will be dependent upon several factors, including but not limited to experience, qualifications, and geographic location. Also, most employees are eligible for additional incentive pay. $100,530.00 - $122,870.00

EEO Statement: Pacific Life Insurance Company is an Equal Opportunity/Affirmative Action Employer, M/F/D/V. If you are a qualified individual with a disability or a disabled veteran, you have the right to request an accommodation if you are unable or limited in your ability to use or access our career center as a result of your disability.
To request an accommodation, contact a Human Resources Representative at Pacific Life Insurance Company.

Posted 3 weeks ago

Five Below - Philadelphia, Pennsylvania
At Five Below our growth is a result of the people who embrace our purpose: We know life is way better when you are free to Let Go & Have Fun in an amazing experience, filled with unlimited possibilities, priced so low, you can always say yes to the newest, coolest stuff! Just ask any of our over 20,000 associates who work at Five Below and they’ll tell you there’s no other place like it. It all starts with our purpose and then, The Five Below Way, which is our values and behaviors that each and every associate believes in. It’s all about culture at Five Below, making this a place that can inspire you as much as you inspire us with big ideas, super energy, passion, and the ability to make the workplace a WOWplace!

Position Overview
We are seeking a Senior Software Data Engineer to lead the design, development, and optimization of our Enterprise Data Platform (EDP) and Customer Data Platform (CDP) using Azure and Azure Databricks. This role will architect metadata-driven data solutions, lead a team of data engineers, and collaborate with cross-functional stakeholders to deliver scalable, secure, and efficient data infrastructure.
The Senior Data Engineer will take a holistic approach, ensuring data pipelines, governance, and analytics align with enterprise and customer-centric objectives in an Azure environment.

Key Responsibilities
Leadership: Mentor and lead a team of data engineers, fostering innovation, collaboration, and technical excellence while overseeing project delivery and resource allocation.
Enterprise Data Platform Development: Architect and maintain a robust EDP on Azure to centralize data assets, enabling enterprise-wide analytics, reporting, and insights generation.
Customer Data Platform Development: Design and implement a CDP on Azure to unify customer data, enabling personalized experiences, segmentation, and advanced analytics.
Metadata-Driven Frameworks: Build scalable, metadata-driven frameworks in Azure Databricks for data ingestion, transformation, and orchestration, ensuring flexibility and reusability.
Data Pipelines: Develop and optimize end-to-end data pipelines using Azure Databricks, Spark, and Apache Kafka for batch and real-time processing to support EDP and CDP use cases.
Azure Cloud Integration: Deploy and manage cloud-native data solutions on Azure, integrating seamlessly with Azure Databricks, Azure Data Lake Storage, and other Azure services.
Data Governance & Compliance: Implement metadata management, data cataloging, and lineage tracking using Databricks Unity Catalog to ensure data quality, security, and regulatory compliance (e.g., GDPR, CCPA).
Database Management: Design and manage scalable databases (e.g., Lakebase) for efficient storage, retrieval, and querying of enterprise and customer data.
Orchestration: Leverage Apache Airflow workflows to orchestrate complex data pipelines reliably and at scale.
Performance Optimization: Enhance pipeline performance, query efficiency, and cost optimization for large-scale data processing in Azure environments.
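The "metadata-driven framework" responsibility above can be sketched in miniature: pipeline behavior is driven by configuration records rather than per-source code. The sketch below uses hypothetical source names and plain-Python stand-ins for the Spark/Databricks read calls a real Azure pipeline would make.

```python
# Minimal sketch of a metadata-driven ingestion framework.
# Source names, paths, and reader bodies are hypothetical illustrations;
# a real Databricks pipeline would call spark.read inside each reader.

READERS = {}

def register_reader(fmt):
    """Register a reader function for a source format."""
    def wrap(fn):
        READERS[fmt] = fn
        return fn
    return wrap

@register_reader("csv")
def read_csv(path):
    # Stand-in for spark.read.csv(path)
    return [{"source": path, "format": "csv"}]

@register_reader("json")
def read_json(path):
    # Stand-in for spark.read.json(path)
    return [{"source": path, "format": "json"}]

# Metadata drives the pipeline: adding a source is a config change, not code.
SOURCES = [
    {"name": "pos_sales", "format": "csv", "path": "/landing/pos_sales"},
    {"name": "web_events", "format": "json", "path": "/landing/web_events"},
]

def run_ingestion(sources):
    """Ingest every configured source via the reader its metadata names."""
    results = {}
    for meta in sources:
        reader = READERS[meta["format"]]
        results[meta["name"]] = reader(meta["path"])
    return results

print(sorted(run_ingestion(SOURCES)))  # → ['pos_sales', 'web_events']
```

The payoff of the pattern is reusability: one generic loop serves every source, and onboarding a new feed means appending one metadata record.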
Collaboration: Partner with product managers, data scientists, analysts, and business stakeholders to align data solutions with enterprise and customer goals.
Innovation: Stay abreast of emerging Azure data technologies and trends, driving adoption of best practices and tools to enhance EDP and CDP capabilities.
Design and implement ad-hoc and automated data pipelines with a strong focus on encryption and decryption requirements.
Collaborate with external vendors to ensure secure data exchange by verifying keys, service accounts, and repository credentials (e.g., SFTP, S3 buckets).
Manage high-volume data transmissions to advertising vendors, including encryption, file parsing, and ensuring end-to-end delivery success.
Provide technical guidance to vendors, often coaching them through secure transmission protocols despite their role as data brokers.
Rapidly deploy and harden transmission solutions to meet compliance standards under tight timelines.
Demonstrate expertise, or the ability to learn quickly, in file-level and record-level encryption/decryption, as well as hashing algorithms at the data element level.

Required Qualifications
Experience: 8-10+ years in data engineering, with 4+ years focused on enterprise-scale data platforms and/or customer data platforms, including 3+ years in a lead or senior role managing teams.
Leadership: Proven ability to lead, mentor, and inspire data engineering teams, with experience in agile project management and cross-functional collaboration.
Azure Databricks Expertise: Advanced proficiency in Azure Databricks, Delta Lake, Spark, and Databricks Workflows for building scalable data solutions.
Big Data Technologies: Deep expertise in Spark, Apache Kafka, and related big data technologies for large-scale and real-time data processing.
Database Management: Strong skills in designing and managing MongoDB, Delta Lake, or similar databases for enterprise and customer data use cases.
Orchestration: Hands-on experience with Apache Airflow workflows for pipeline orchestration.
Metadata & Governance: Expertise in metadata management, data cataloging, and lineage tools to ensure data quality and compliance.
Programming Skills: Proficient in Python, PySpark, Spark SQL, and SQL for developing robust data pipelines and transformations.
SQL Mastery: Advanced skills in query optimization and performance tuning for large-scale datasets in Azure data lakes and warehouses.
Data Modeling: Strong knowledge of data modeling techniques, including star schema, snowflake schema, and data lake/warehouse architectures.
Problem-Solving: Exceptional analytical skills with a solution-oriented mindset to tackle complex data challenges.
Communication: Outstanding verbal and written communication skills to collaborate with technical and non-technical stakeholders.

Preferred Qualifications
Experience building or managing Customer Data Platforms (CDP) for marketing, personalization, or customer analytics use cases on Azure.
Familiarity with data privacy frameworks and compliance standards (e.g., GDPR, CCPA).
Exposure to machine learning pipelines or data science workflows integrated with EDP/CDP on Azure.
Knowledge of hybrid or multi-cloud environments involving Azure and other platforms.

Explore our benefits site to discover all the perks and support we offer! From health coverage to financial and personal wellness, we've got you covered—check it out today! benefits.fivebelow.com/public/welcome

Five Below is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, national origin, disability, protected veteran status, gender identity or any other factor protected by applicable federal, state, or local laws.
Five Below is committed to working with and providing reasonable accommodations for individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please submit a request and let us know the nature of your request and your contact information. crewservices.zendesk.com/hc/en-us/requests/new

BE AWARE OF FRAUD! Please be aware of potentially fraudulent job postings or suspicious recruiter activity by persons posing as Five Below recruiters. Please confirm that the person you are working with has an @fivebelow.com email address. Additionally, Five Below does NOT request financial information or payments from candidates at any point during the hiring process. If you suspect fraudulent activity, please visit Five Below's Career Site to verify the posting. fivebelow.com/info/careers

Posted today

Marsh & Mclennan Companies, Inc. logo
Marsh & Mclennan Companies, Inc.Chicago, IL
Boston, Chicago, New York, Dallas, Toronto, Montreal Lead Data Scientist ____ WHO WE ARE Oliver Wyman is a global leader in management consulting. With offices in 50+ cities across 30 countries, Oliver Wyman combines deep industry knowledge with specialized expertise in strategy, finance, operations, technology, risk management, and organizational transformation. Our 4000+ professionals help clients optimize their business, improve their IT, operations, and risk profile, and accelerate their organizational performance to seize the most attractive opportunities. Our professionals see what others don't, challenge conventional thinking, and consistently deliver innovative, customized solutions. As a result, we have a tangible impact on clients' top and bottom lines. Our clients are the CEOs and executive teams of the top Global 1000 companies. ____ PRACTICE OVERVIEW At Oliver Wyman Digital we partner with clients to deliver breakthrough outcomes for their toughest digital challenges. We blend the power of digital technology with deep industry expertise to tackle disruption and create impact. By building strong capabilities and culture, we accelerate and embed digital transformation. Our people co-create and grow customer-focused solutions that win. We modernize technology and harness value from data and analytics. We build resilience so our clients are ready for tomorrow's risks and can optimize operations for the future. Above all, we work collaboratively with our clients' leaders, employees, stakeholders, and customers to jointly define, design, and achieve lasting results. ____ THE ROLE AND RESPONSIBILITIES Our clients drive our projects - and no two OW Digital projects are the same. You'll be working with varied and diverse teams to deliver unique and unprecedented products across industries. 
As a Lead Data Scientist, you are primarily responsible for managing technical projects, including data engineering, model selection and design, and infrastructure deployment in both internal and client environments. We want and expect our people to develop deep expertise in a particular industry (financial services, health and life sciences, etc.), but you should be comfortable developing methods and selecting approaches based on a combination of first principles thinking, curiosity, and your pre-built foundations of software engineering and development. As a Data Scientist, you will work alongside Oliver Wyman partners in the Digital and other practice groups, engage directly with clients to understand their business challenges, and craft appropriate solutions to be delivered through collaboration with other OW Digital specialists and consultants. Your responsibilities will include: Exploring data, building models, and evaluating solution performance to resolve core business problems Explaining, refining, and collaborating with stakeholders through the journey of model building Keeping up with your domain's state of the art & developing familiarity with emerging modelling and data engineering methodologies Advocating application of best practices in modelling, code hygiene and data engineering Leading the development of proprietary statistical techniques, algorithms or analytical tools on projects and asset development Working with Partners and Principals to shape proposals that leverage our data science and engineering capabilities ____ YOUR EXPERIENCE & QUALIFICATIONS You are a well-rounded technologist who brings a wealth of real-world experience and: Technical background in computer science, data science, machine learning, artificial intelligence, statistics, or other quantitative and computational science Compelling track record of designing and deploying large-scale technical solutions, which deliver tangible, ongoing value including: Building and deploying 
robust, complex production systems that implement modern data science methods at scale, including supervised learning (regression and classification with linear and non-linear methods) and unsupervised learning (clustering, matrix factorization methods, outlier detection, etc.) Leveraging cloud-based infrastructure-as-code (CloudFormation, Bicep, Terraform, etc.) to minimize deployment toil and enabling solutions to be deployed across environments quickly and repeatably Demonstrating comfort and poise in environments where large projects are time-boxed, and therefore consequential design decisions may need to be made and acted upon rapidly Demonstrated fluency in modern programming languages for data science (i.e. at least Python, other expertise welcome), covering the full ML lifecycle (e.g. data storage, feature engineering, model persistence, model inference, and observability) using open-source libraries, including: Knowledge of one or more machine learning frameworks, including but not limited to: Scikit-Learn, TensorFlow, PyTorch, MxNet, ONNX, etc. Familiarity with the architecture, performance characteristics and limitations of modern storage and computational frameworks, with cloud-first considerations for Azure and AWS particularly welcome A history of compelling side projects or contributions to the Open-Source community is valued but not required Solid theoretical grounding in the mathematical core of the major ideas in data science: Deep understanding of a class of modelling or analytical techniques (e.g. Bayesian modeling, time-series forecasting, etc.) Fluency in the mathematical principles and generalizations of data science - e.g., Statistics, Linear Algebra and Vector Calculus Experience presenting at high-impact data science conferences and solid connections to the data science community (e.g., via meetups, continuing relationships with academics, etc.) 
is highly valued Interest/background in Financial Services, and capital markets in particular, Healthcare and Life Sciences, Consumer, Retail, Energy, or Transportation industries ____ YOUR ATTRIBUTES Our team comprises all sorts of people from all sorts of backgrounds. We don't care whether you're loud or quiet, funny, or serious, introverted or extroverted. We do, however, ask that you have: An undergraduate or advanced degree from a top academic program A genuine passion for technology and solving problems A pragmatic approach to solutioning and delivery Excellent communication skills, both verbal and written A clear commitment to creating impactful solutions that solve our clients' problems The ability to work fluidly and respectfully with our incredibly talented team Willingness to travel for targeted client and/or internal stakeholder meetings ____ OUR VALUES & CULTURE We're serious about making OW Digital a rewarding, enjoyable, and balanced place to work. Rewarding work We've worked hard to earn our reputation for high quality work. That reputation allows us to work with major brands at all levels on incredibly exciting projects. Combine that with Oliver Wyman's status as one of the Fortune 100 "Best Companies to Work For", and you get a rewarding combination of challenge, support, and recognition. Progressive employment Flat organizational structures, resolute I&D values, and a commitment to rewarding good work make for a progression path truly based on merit. A menu of healthcare options, 401k matching, and a culture of continuous improvement means your work gets more rewarding over time. Enjoyable days We want our team members to build a career here-and to be happy. That makes us serious about caring for, mentoring, developing, and sponsoring each other. This commitment also leads to opportunities for social impact and community work on company time. Balanced lives Our work is demanding, and we want you to have the best work-life balance you can. 
We'll work with you to accommodate your personal life with flexible hours and the ability to work from home. ____ HOW TO APPLY If you like what you've read, we'd love to hear from you! You can find this and other roles and submit your CV at https://careers.marshmclennan.com/global/en/oliver-wyman-search . And please include a short note introducing yourself and what you're looking for. The application process will include both technical testing and team fit interviews. Oliver Wyman is an equal opportunity employer. Our commitment to diversity is genuine, deep, and growing. We're not perfect, but we're working hard right now to make our teams balanced, representative and diverse. ____ ABOUT OLIVER WYMAN Oliver Wyman is a global leader in management consulting. With offices in more than 70 cities across 30 countries, Oliver Wyman combines deep industry knowledge with specialized expertise in strategy, operations, risk management, and organization transformation. The firm has more than 6,000 professionals around the world who work with clients to optimize their business, improve their operations and risk profile, and accelerate their organizational performance to seize the most attractive opportunities. Oliver Wyman is a business of Marsh McLennan [NYSE: MMC]. For more information, visit www.oliverwyman.com. Follow Oliver Wyman on Twitter @OliverWyman. Marsh McLennan and its Affiliates are EOE Minority/Female/Disability/Vet/Sexual Orientation/Gender Identity employers. The applicable base salary range for this role is $150,000 to $195,000. The base pay offered will be determined on factors such as experience, skills, training, location, certifications, and education, and any applicable minimum wage requirements. Decisions will be determined on a case-by-case basis. In addition to the base salary, this position may be eligible for performance-based incentives.
We are excited to offer a competitive total rewards package which includes health and welfare benefits, tuition assistance, 401K savings and other retirement programs as well as employee assistance programs. Oliver Wyman, a business of Marsh McLennan (NYSE: MMC), is a management consulting firm combining deep industry knowledge with specialized expertise to help clients optimize their business, improve operations and accelerate performance. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit oliverwyman.com, or follow on LinkedIn and X. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age background, disability, ethnic origin, family duties, gender orientation or expression, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, veteran status (including protected veterans), or any other characteristic protected by applicable law. If you have a need that requires accommodation, please let us know by contacting reasonableaccommodations@mmc.com. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one "anchor day" per week on which their full team will be together in person.

Posted 30+ days ago

Oscar Health Insurance logo
Oscar Health InsuranceNew York, NY
Hi, we're Oscar. We're hiring a Senior Data Scientist to join our Engineering team. Oscar is the first health insurance company built around a full stack technology platform and a focus on serving our members. We started Oscar in 2012 to create the kind of health insurance company we would want for ourselves-one that behaves like a doctor in the family. About the role Oscar is in a unique position to drive impact through high engagement with members and a variety of high-tech touchpoints (such as virtual care or superagent). The Senior Data Scientist develops personalized multi-channel marketing campaigns to influence member behaviors that are key to their health. You will ideate, scope business cases, design targeted pipelines using predictive models, launch campaigns, monitor and optimize campaigns in real time to identify the best-performing marketing variants, and measure campaign impact. This will involve research, pipeline/ML model development, optimization techniques (e.g. multi-armed bandits), and measurement techniques (traditional statistical and causal machine learning). You will work cross-functionally with marketing and clinical teams, dedicated to the specific goal of lowering medical cost. You will have accountability for the success of each campaign. You will report to the Senior Director, Data Science. Work Location: Oscar is a blended work culture where everyone, regardless of work type or location, feels connected to their teammates, our culture and our mission. This is a hybrid role in our New York office. You will work part of the time in the office and part of the time remote / work-from-home. #LI-Hybrid Pay Transparency: The base pay for this role is: $158,400 - $207,900 per year. You are also eligible for employee benefits, participation in Oscar's unlimited vacation program, company equity grants, and annual performance bonuses.
Responsibilities
Works with team and manager to impact longer-term strategies and roadmap
Sets shorter-term roadmap for team
Interacts and collaborates closely with data and business counterparts across functional areas
Researches, develops, and maintains data pipelines, statistical/ML models, and/or advanced analyses
Compliance with all applicable laws and regulations
Other duties as assigned

Qualifications
4+ years of experience in industry or other quantitative technical fields (which may include academia)
3+ years of work experience working with SQL, R, and/or Python to query, manipulate and analyze data
3+ years of experience building data models, using advanced analytics methods, statistical modeling, and/or data processing

Bonus points
Advanced degree in a quantitative or technical field
Experience in the healthcare, finance and/or insurance industries

Travel: Up to 5%

This is an authentic Oscar Health job opportunity. Learn more about how you can safeguard yourself from recruitment fraud here. At Oscar, being an Equal Opportunity Employer means more than upholding discrimination-free hiring practices. It means that we cultivate an environment where people can be their most authentic selves and find both belonging and support. We're on a mission to change health care -- an experience made whole by our unique backgrounds and perspectives. Pay Transparency: Final offer amounts, within the base pay set forth above, are determined by factors including your relevant skills, education, and experience. Full-time employees are eligible for benefits including: medical, dental, and vision benefits, 11 paid holidays, paid sick time, paid parental leave, 401(k) plan participation, life and disability insurance, and paid wellness time and reimbursements. Reasonable Accommodation: Oscar applicants are considered solely based on their qualifications, without regard to applicant's disability or need for accommodation.
Any Oscar applicant who requires reasonable accommodations during the application process should contact the Oscar Benefits Team (accommodations@hioscar.com) to make the need for an accommodation known. Artificial Intelligence (AI) Guidelines: Please see our AI Guidelines for the acceptable use of artificial intelligence during the interview process at Oscar. California Residents: For information about our collection, use, and disclosure of applicants' personal information as well as applicants' rights over their personal information, please see our Notice to Job Applicants.

Posted 2 weeks ago

Acuity logo
AcuitySheboygan, WI
Acuity is seeking a Data Scientist Intern - Data Analytics to support data analytic insights and visualization using mathematical models, algorithms, machine learning techniques, and data analytics platforms. This position is located at our headquarters in Sheboygan, WI. Click here to learn more about the Acuity internship experience. Start Date: Summer 2026

ESSENTIAL FUNCTIONS: Conduct appropriate research and data collection. Summarize analysis and findings with accuracy and the proper level of detail. Work with others to develop the best solutions and accomplish goals. Regular and predictable attendance. Perform other duties as assigned. Follow department standards and procedures.

EDUCATION: Currently enrolled in a formal university undergraduate degree program pursuing mathematics, actuarial science, or a related field, with a high level of achievement, including academic grades.

EXPERIENCE: Previous responsibilities demonstrating goal-oriented outcomes and teamwork.

OTHER QUALIFICATIONS: Effective verbal and written communication skills. Proven mathematical, analytical, and problem-solving skills. Ability to work independently and on a team. Ability and desire to learn various actuarial techniques and procedures. Ability and desire to become familiar with company operations, actuarial methodologies and computer systems.

Acuity does not sponsor applicants for U.S. work authorization. This job is classified as non-exempt. We are an Equal Employment Opportunity employer. Applicants and employees are considered for positions and are evaluated without regard to mental or physical disability, race, color, religion, gender, national origin, age, genetic information, military or veteran status, sexual orientation, marital status or any other protected Federal, State/Province or Local status unrelated to the performance of the work involved.
If you have a disability and require reasonable accommodations to apply or during the interview process, please contact our Talent Acquisition team at careers@acuity.com. Acuity is dedicated to offering reasonable accommodations during our recruitment process for qualified individuals.

Posted 1 week ago

Truveta logo
TruvetaSeattle, WA
Healthcare Data Analyst, Data Ecosystem Team
Truveta is the world’s first health provider-led data platform with a vision of Saving Lives with Data. Our mission is to enable researchers to find cures faster, empower every clinician to be an expert, and help families make the most informed decisions about their care. Achieving Truveta’s ambitious vision requires an incredible team of talented and inspired people with a special combination of health, software, and big data experience who share our company values. Truveta was born in the Pacific Northwest, but we have employees who live across the country. Our team enjoys the flexibility of a hybrid model and working from anywhere. In-person attendance is required for one week during the year for Truveta Planning Week. For overall team productivity, we optimize meeting hours in the Pacific time zone. We avoid scheduling recurring meetings that start after 3pm PT; however, ad hoc meetings occur between 8am and 6pm Pacific time. #LI-remote

Who We Need
Truveta is rapidly building a talented and diverse team to tackle complex health and technical challenges. Beyond core capabilities, we are seeking problem solvers, passionate and collaborative teammates, and those willing to roll up their sleeves while making a difference. If you are interested in the opportunity to pursue purposeful work, join a mission-driven team, and build a rewarding career while having fun, Truveta may be the perfect fit for you.

This Opportunity
As part of the Ecosystem division, the newly formed Healthcare Analytics team is central to delivering on Truveta’s mission by empowering health system clinical and administrative leaders to measure, learn, and improve. We are building exemplary metrics, dashboards, and benchmarks that inspire adoption of Truveta by our member health systems.
The Healthcare Analytics team is looking for a Healthcare Data Analyst who thrives at the intersection of EHR data expertise, rigorous analytics qualifications, and collaborative problem solving. You will play a critical role in creating high-quality analytic outputs that health systems can adopt and customize to improve care quality, population health, clinical operations, and financial sustainability, solving consequential problems in healthcare using an EHR database of ~120 million patients (and growing!), while positively impacting patient outcomes. This role is ideal for someone with hands-on experience working with EHR data, strong data wrangling skills, and a passion for turning data into meaningful insight that resonates with clinicians, health system executives, and operational leaders. As a Healthcare Data Analyst, you will have the opportunity to translate complex clinical and claims data into clear, defensible evidence that supports member initiatives in safety, quality, cost reduction, and growth.

Responsibilities
Develop iconic analytic outputs (studies, dashboards, benchmarks) that demonstrate Truveta’s unique value and inspire members to replicate, customize, and apply insights to address common, high-priority health system challenges.
Wrangle large-scale healthcare datasets and build reproducible queries using SQL, R, and/or Python to scope analytic use cases, assess feasibility, and deliver studies and dashboards within agreed timelines, while developing subject matter expertise in Truveta’s proprietary coding language and analytics platform.
Engage with clinical, quality, and operational leaders by delivering case studies, interactive demos, and analytic output that showcase Truveta’s differentiated capabilities and highlight how Truveta can impact healthcare’s mission and margin objectives.
Collaborate closely with cross-functional teams to validate data quality, investigate issues, and provide feedback that informs Truveta’s product roadmap.
Use AI thoughtfully and strategically to spark new ideas and tackle problems. Apply AI to speed feedback loops, test hypotheses, and deliver insights faster, while balancing judgment, creativity, and an awareness of its limitations.

Required Skills
Undergraduate or graduate (preferred) education in data analysis, clinical informatics, epidemiology, public health, or a related field.
Experience working with large relational databases consisting of millions of patients' records.
Experience building dashboards, benchmarks, or metrics to achieve measurable improvement in health system operations, quality outcomes, or population health.
2+ years of experience wrangling and analyzing EHR data or other real-world data sources using SQL, R, and Python.
Knowledge of clinical terminologies such as ICD, SNOMED, LOINC, RxNorm, or NDC.
A willingness to learn new coding languages, including an internal proprietary coding language, to analyze data and build cohorts.
Experience translating healthcare and operational concepts into analytic workflows.
Strong communication skills to present insights and results to both technical and non-technical audiences.
Ability to learn and adapt quickly in a dynamic start-up environment.

Preferred Qualifications
These qualifications are preferred but not required; please do not let them stop you from applying for this role. You will likely get the opportunity to learn how to do these more advanced analyses if you don’t already have experience with them.
Experience working with unstructured clinical data, natural language processing outputs, or AI/ML tools
Knowledge of distributed computing platforms (Spark) and associated data analysis languages (Spark SQL, PySpark, SparkR)
Experience building cohort definitions, defining metrics, and interpreting analytic findings

Why Truveta? Be a part of building something special. Now is the perfect time to join Truveta. We have strong, established leadership with decades of success. We are well-funded.
We are building a culture that prioritizes people and their passions across personal, professional and everything in between. Join us as we build an amazing company together. We Offer: Interesting and meaningful work for every career stage Great benefits package Comprehensive benefits with strong medical, dental and vision insurance plans 401K plan Professional development & training opportunities for continuous learning Work/life autonomy via flexible work hours and flexible paid time off Generous parental leave Regular team activities (virtual and in-person as soon as we are able) The base pay for this position is $105,000 to $130,000. The pay range reflects the minimum and maximum target. Pay is based on several factors including location and may vary depending on job-related knowledge, skills, and experience. Certain roles are eligible for additional compensation such as incentive pay and stock options. If you are based in California, we encourage you to read this important information for California residents linked here. Truveta is committed to creating a diverse, inclusive, and empowering workplace. We believe that having employees, interns, and contractors with diverse backgrounds enables Truveta to better meet our mission and serve patients and health communities around the world. We recognize that opportunities in technology historically excluded and continue to disproportionately exclude Black and Indigenous people, people of color, people from working class backgrounds, people with disabilities, and LGBTQIA+ people. We strongly encourage individuals with these identities to apply even if you don’t meet all of the requirements. Please note that all applicants must be authorized to work in the United States for any employer as we are unable to sponsor work visas or permits (e.g. F-1 OPT, H1-B) at this time. We appreciate your interest in the position and encourage you to explore future opportunities with us.

Posted 4 days ago

C3 AI, Redwood City, CA
C3 AI (NYSE: AI) is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products: the C3 Agentic AI Platform, an end-to-end platform for developing, deploying, and operating enterprise AI applications; C3 AI applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally; and C3 Generative AI, a suite of domain-specific generative AI offerings for the enterprise. Learn more at: C3 AI. As a Data Scientist / Senior Data Scientist -- Optimization, you will work with some of the world’s largest companies to design and deliver the next generation of AI-powered enterprise applications, where optimization solutions play a critical role. Our team focuses on developing scalable, explainable optimization models and algorithms tailored to diverse industry verticals, including supply chain, manufacturing, food processing, agriculture, health care, and more. You will work alongside peer data scientists, software engineers, subject matter experts and business stakeholders to deliver end-to-end optimization solutions that drive measurable business value and enable digital transformation for our customers. Your responsibilities will span the entire solution lifecycle, including: understanding business requirements and translating them into optimization problems, analyzing historical data to uncover insights and guide solution design, developing advanced optimization models and algorithms to address complex business challenges, building prototypes and products to validate solutions and demonstrate value, deploying production-grade solutions on the C3 AI Suite, and ensuring scalability and robustness in real-world operations. C3 AI Data Scientists are equipped with modern development tools, IDEs, and AI agents to maximize productivity and accelerate solution delivery.
Qualified candidates should possess deep expertise in operations research and optimization, along with a solid understanding of common machine learning techniques and their practical applications. Candidates with extensive experience may be considered for more senior positions within this category. Responsibilities: Provide thought leadership and expertise in optimization, guiding strategic decision-making and technical direction. Research, design, implement, and deploy optimization solutions for enterprise applications leveraging the C3 AI Suite. Assist and enable C3 AI customers to build their own optimization applications on the C3 AI Suite. Collaborate with or lead a small technical team on a project, identifying potential risks and implementing mitigation strategies to ensure project success. Partner with cross-functional teams to translate optimization model insights into actionable business strategies and measurable outcomes. Develop, maintain and enhance optimization frameworks, libraries, and tools to ensure scalability and efficiency while contributing to the continuous improvement of the C3 AI Suite. Stay informed on state-of-the-art optimization techniques, promote best practices, and foster an innovative and collaborative work environment at C3 AI. Qualifications: MS or PhD in Operations Research, Applied Mathematics, Computer Science, Artificial Intelligence, Industrial Engineering or equivalent. Deep understanding of optimization (constrained, stochastic, convex and non-convex optimization problems, and LP, QP, MILP, and MINLP problems and solvers). Strong mathematical background (linear algebra, calculus, statistics, numerical simulation). Demonstrated expertise in Python (or a similar object-oriented programming language), with hands-on experience in at least one mathematical programming library or framework. Ability to drive a project and work both independently and in a team. Smart, motivated, with a can-do attitude and a drive to make a difference.
Curiosity and willingness to learn about our customers’ industries. Excellent verbal and written communication in English. Work will be conducted at our office in Redwood City, California. Preferred Qualifications: Professional experience applying optimization in a customer-facing role. Practical experience with at least one type of commercial solver (e.g., Gurobi). Knowledge of git and experience with GenAI productivity tools. Experience with applied/scalable ML, state-of-the-art deep learning, and reinforcement learning problem formulation and model architecture design. A portfolio of projects (GitHub, etc.) and publications at top-tier peer-reviewed conferences or journals is a plus. Experience working with modern IDEs and AI agent tools as part of accelerated development workflows. C3 AI provides excellent benefits, a competitive compensation package and generous equity plan. California Base Pay Range: $123,000 — $185,000 USD. C3 AI is proud to be an Equal Opportunity and Affirmative Action Employer. We do not discriminate on the basis of any legally protected characteristics, including disabled and veteran status.
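For readers unfamiliar with the LP/QP/MILP problem classes listed in the qualifications above, here is a minimal, generic linear-programming sketch. It uses `scipy.optimize.linprog` purely for illustration; the posting only asks for "at least one mathematical programming library," and C3 AI's actual tooling is not specified:

```python
from scipy.optimize import linprog

# Toy LP: maximize 3x + 2y  subject to  x + y <= 4,  x <= 2,  x, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
res = linprog(
    c=[-3, -2],                      # negated objective coefficients
    A_ub=[[1, 1], [1, 0]],           # inequality constraint matrix
    b_ub=[4, 2],                     # constraint right-hand sides
    bounds=[(0, None), (0, None)],   # x >= 0, y >= 0
)
print(res.x, -res.fun)  # optimal point and maximized objective value
```

The optimum here sits at a vertex of the feasible region (x = 2, y = 2, objective 10), which is the defining behavior of LP solvers; MILP adds integrality constraints on some variables, which is what commercial solvers such as Gurobi specialize in.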

Posted 3 weeks ago

Illinois Secretary of State, Chicago, IL
Office of the Illinois Secretary of State
Alexi Giannoulias
Job Title: Chief Data Officer (Data Systems Administrator)
Division: Administration
Union: N/A
Location: 115 S LaSalle St., Chicago, IL - Cook County
Salary: Range $6,904 to $13,522 monthly – commensurate with experience
Benefits: https://cms.illinois.gov/benefits/stateemployee.html
Overview: Leads and directs a statewide enterprise data program through subordinate supervisory staff, overseeing technical, professional, and analytical teams responsible for data governance, data architecture, data quality, and enterprise data management activities. This role is responsible for establishing a vision and strategy for data across the organization, ensuring its alignment with business goals, compliance requirements, and enterprise architecture standards. Duties and Responsibilities: Program Leadership and Oversight - Through subordinate supervisors, plans, organizes, directs, and evaluates the work of data analysts, data stewards, architects, and technical staff supporting enterprise data initiatives, including data quality assessment, metadata management, master data management, and governance policy enforcement. Participates in hiring and workforce planning activities for the Enterprise Data Management team. Enterprise Data Governance Frameworks and Standards - Develops and implements procedures, policies, and governance frameworks for enterprise data use across all departments. Establishes standards for data stewardship, metadata, lineage, and classification. Approves or refines recommendations from subordinate staff regarding department-specific data governance implementation, ensuring consistency and scalability across the organization. Strategic Planning and Policy Development - Analyzes and recommends data strategy enhancements and enterprise-wide data management policies to the Department Director.
Leads the creation of long-range data plans that support enterprise goals and digital transformation efforts. Identifies data architecture enhancements, oversees enterprise data integration planning, and ensures alignment with statewide technology initiatives. Consultation and Advisory Services - Serves as the enterprise advisor to division managers statewide regarding data strategy, governance adoption, data-driven decision-making, and compliance with regulatory and internal standards. Provides guidance on organizational readiness for advanced analytics, data modernization efforts, and new data technologies. Budgetary and Investment Planning - Provides expert consultation on the development of proposed budgets related to enterprise data systems, platforms, and tools. Reviews and advises on strategic investments in data technology and services, ensuring cost-effectiveness and alignment with data strategy and business needs. Stakeholder Engagement and Communication - Maintains ongoing communication and engagement with department leaders across all divisions to advocate for enterprise data initiatives and educate teams on data governance practices. Leads forums, working groups, and training initiatives to foster a data-literate culture. Maintains contact with external data platform vendors, regulatory bodies, and industry peers to stay informed of data management advancements and opportunities for the Office of the Secretary of State. Performs other duties as required or assigned. Education and Experience: Requires knowledge, skill and mental development equivalent to completion of four years of college, preferably with courses in computer science, management information systems, mathematics or statistics, supplemented by a master's degree in computer science or a related field, and two years of managerial experience in a computer-based management information system.
Knowledge, Skills and Abilities: Requires thorough knowledge of management information systems, database administration, systems analysis and computer communication systems. Requires thorough knowledge of the principles of organization, management and administration. Requires extensive knowledge of research and statistical procedures, program budgeting and systems modeling. Requires ability to implement program evaluations or system assurance. Requires ability to anticipate and resolve managerial problems. Requires willingness to travel and possession of a valid Illinois driver’s license as required by individual positions within the class. Requires the ability to lift, carry, and push/pull 0-50 lbs. Application Process: Please visit https://ilsos.applytojob.gov/apply to apply by completing the online application; you may also upload a resume or other attachments as needed. Preference will be given to Illinois residents in the hiring and selection process, in accordance with the Illinois Secretary of State Merit Employment Code. Questions regarding this posting or Illinois Secretary of State employment practices may be directed to Job Counselors at our Personnel offices in Chicago (312-793-5515) or Springfield (217-782-4783). Equal Employment Opportunity Employer. Applicants must be lawfully authorized to work in the United States. Applicants are considered for all positions without regard to race, color, religion, sex, national origin, sexual orientation, age, marital or veteran status, or the presence of a non-job-related medical condition or disability. Powered by JazzHR

Posted 30+ days ago

Two95 International Inc., Marlborough, MA
Title: IBM Data Stage (Big Data Edition)
Location: Marlborough, MA
Duration: Full Time
Salary: $Market
Requirements
Key skills required for the job are:
- IBM Data Stage Big Data Edition-L3 (Mandatory)
- Data Stage Admin-L3
- IBM InfoSphere CDC-L3
As a Lead Administrator, you should be able to act as a single point of contact for the technical tower in front of customer management. Ensure proper communication and quick resolution as a crisis manager. Responsible for vendor management and people management. Drives day-to-day operations and work plan allocation/management. Conduct periodic reviews with teams. Provide weekly and monthly status reports to higher management. Participate in business meetings with various stakeholders. Take corrective actions based on customer satisfaction surveys. Drive service improvement programs. Ensure adherence to quality/security standards defined for the engagement. Perform trend analysis, identify top incidents, and work with the respective teams/individuals to minimize them. Provide effort estimations/reviews on a need basis for new projects. Minimum work experience: 5-8 years Benefits Note: If interested, please send your updated resume to sagar.chand@two95intl.com and include your rate requirement along with your contact details and a suitable time when we can reach you. If you know of anyone in your sphere of contacts who would be a perfect match for this job, we would appreciate it if you could forward this posting to them with a copy to us. We look forward to hearing from you at the earliest!

Posted 30+ days ago

AQMetrics, New York, NY
AQMetrics is looking for a Data Integration and Data Operations Analyst to work with customers and cross-functional teams at AQMetrics to continuously improve our data management capabilities in our SaaS platform. This is an opportunity to make a difference in a growing company. We are looking for an entrepreneurial individual who thrives on challenges and is looking for an opportunity to take AQMetrics data management products to the next level. Who We Are: AQMetrics is a leading provider of regulatory risk software to global financial institutions. Our SaaS platform is award-winning, and our range of products makes regulatory risk management simple, secure, and globally compliant. What We Value: People First Putting people first is at the core of everything we do. It compels us to make decisions based on what is best for our people — employees, partners, and customers. Customer Delight We are customer focused. We strive to provide best-in-class service and to drive a great customer experience through teamwork and high performance. Integrity We are committed to professional integrity. We conduct our business to the highest standards with skill, diligence and responsibility. Professional trust, honesty and compliance are at the core of our culture. Innovation We value ideas and encourage innovation every day.
What you will do: Designing and Developing: Building and maintaining data pipelines that extract, transform, and load (ETL) data from various sources Data Integration and API Development : Implement integrations with internal and external data sources and APIs Collaborating with Teams: Working with cross-functional teams to gather data requirements and understand data sources Ensuring Data Quality: Implementing data quality checks and validation processes to maintain data integrity Monitoring and Troubleshooting: Overseeing data integration workflows to resolve issues and optimize performance Documenting Processes: Keeping detailed documentation of data integration processes, data flows, and system configurations Troubleshooting and Optimization: Maintaining the AQMetrics database Staying Informed: Keeping up with industry trends and best practices in data integration and management Providing Support and Training: Assisting end-users with data integration Requirements 1-3 years in a Data Analyst or Engineer role preferred Experience working collaboratively with application development teams to implement data mapping, data management and data integration solutions Good knowledge of database and data modelling techniques, and SQL proficiency. Experience troubleshooting databases and data issues using SQL Strong critical thinking skills and ability to analyze complex datasets Prioritization skills Focus on continuous improvement Storytelling skills. 
You can communicate your insights to your team, cross-functionally, and to customers. Linux experience and shell scripting. Experience with Amazon AWS and RDS. BSc in Computer Science or Engineering degree, related technical discipline, or equivalent work experience. Nice to have: experience with data visualization tools. Three days of in-office work per week is required.
Benefits:
- Opportunity to be part of something special; AQMetrics is growing fast, and we want you to be part of our journey
- Remote Work
- People-centric culture
- Competitive salary
- 25 days annual leave
- Health Insurance
- 401K
- Excellent holiday allowance
- Upskilling opportunities
- Flexible working
What our Interview Process is like:
Step 1 - After you apply, our HR Manager may reach out to you for an introductory call
Step 2 - If your background is a match for the role, you may be required to complete a technical assessment (role dependent) and/or a phone interview with 1-2 people
Step 3 - If you continue through the process, you may be asked to come onsite to interview
AQMetrics is an equal-opportunity employer. We are committed to an inclusive and diverse AQMetrics.
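The extract-transform-load responsibilities described in this posting, including data-quality checks during the transform step, can be sketched in miniature. The data, schema, and quality rule below are invented for illustration; AQMetrics' actual pipelines are not described in the posting:

```python
import csv
import io
import sqlite3

# Extract: parse CSV rows from a (hypothetical) source feed.
raw = "trade_id,notional,currency\n1,1000.50,USD\n2,,EUR\n3,250.00,USD\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: normalize types and enforce a simple data-quality rule
# (notional must be present); failing rows are dropped.
clean = [
    {"trade_id": int(r["trade_id"]),
     "notional": float(r["notional"]),
     "currency": r["currency"]}
    for r in rows
    if r["notional"]
]

# Load: write the validated rows into the target store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE trades (trade_id INT, notional REAL, currency TEXT)")
db.executemany("INSERT INTO trades VALUES (:trade_id, :notional, :currency)", clean)
count = db.execute("SELECT COUNT(*) FROM trades").fetchone()[0]
print(count)  # 2 of 3 rows survive validation
```

Production pipelines add the pieces the posting also mentions: monitoring of workflow health, alerting on validation failures rather than silently dropping rows, and documentation of each data flow.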

Posted 30+ days ago

Node.Digital, Washington, DC
ETL/Data Engineer/Data Architect
Location: Remote work
Must be a U.S. citizen OR Permanent Resident Alien (Green card holder)
Job Description
Node is currently seeking a motivated, career- and customer-oriented Senior Data Architect to begin an exciting and challenging career with our large Enterprise Application Support Program on one of our project delivery teams.
Job Responsibilities
- Design and implement effective database structures and models to store, retrieve, and analyze data.
- Develop, construct, test, and maintain scalable data pipelines to collect, process, and integrate data from various sources.
- Implement ETL (Extract, Transform, Load) processes to ensure data consistency and quality.
- Integrate data from different sources, ensuring consistency, reliability, and accuracy.
- Develop data APIs and automation scripts to streamline data integration and workflows.
- Monitor and optimize database and data processing system performance.
- Conduct performance tuning and troubleshoot data issues.
Requirements
Required:
- Bachelor's degree in Computer Science, Management Information Systems, or relevant discipline (4 years of equivalent experience)
- 8+ years' experience with:
  - Proven experience as a Data Architect, Data Engineer, or in a similar role.
  - Extensive experience in designing and implementing data architectures.
  - Hands-on experience in developing and managing data pipelines and ETL processes.
  - Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, SQL Server).
  - Experience with big data technologies (e.g., Hadoop, Spark) and ETL tools.
  - Strong programming skills in languages such as Python, Java, or Scala.
Security Clearance Requirements
- Must be a U.S. citizen OR Permanent Resident Alien (Green card holder and NOT H1 Visa holder)
- Ability to obtain an IRS MBI (Minimum Background Investigation) Security Clearance from the Federal Agency.
- Active IRS MBI clearance is highly desirable
Company Overview: Node.Digital is an independent Digital Automation & Cognitive Engineering company that integrates best-of-breed technologies to accelerate business impact. Our Core Values help us in our mission. They include:
OUR CORE VALUES
- Identifying the RIGHT PEOPLE and developing them to their full capabilities
- Our customer’s “Mission” is our “Mission”. Our MISSION FIRST approach is designed to keep our customers fully engaged while becoming their trusted partner
- We believe in SIMPLIFYING complex problems with a relentless focus on agile delivery excellence
- Our mantra is “Simple*Secure*Speed” in delivery of innovative services and solutions
Benefits
We are proud to offer competitive compensation and benefits packages to include:
- Medical
- Dental
- Vision
- Basic Life
- Health Savings Account
- 401K
- Three weeks of PTO
- 10 Paid Holidays
- Pre-Approved Online Training

Posted 30+ days ago

DPR, Raleigh, North Carolina
Job Description DPR is looking for an experienced AI Data Engineer to join our Data and AI team and work closely with the Data Platform, BI and Enterprise Architecture teams to influence the technical direction of DPR’s AI initiatives. You will work closely with cross-functional teams, including business stakeholders, data engineers, and technical leads, to ensure alignment between business needs and data architecture and define data models for specific focus areas. Responsibilities Integrate the semantic layer to serve as an AI-ready knowledge base, enabling applications such as advanced analytics, prompt engineering for large language models, and intelligent data discovery while ensuring seamless connectivity and holistic data understanding across the enterprise. Develop standards, guidelines, and best practices for knowledge representation, semantic modeling, and data standardization across DPR to ensure a clear and consistent approach within the enterprise semantic layer. Establish and refine operational processes for semantic model development, including intake mechanisms for new requirements (e.g., from AI prompt engineering initiatives) and backlog management, ensuring efficient and iterative delivery. Partner closely with analytics engineers and data architects to deeply understand the underlying data models in Snowflake and develop a profound understanding of our business domains and data entities. Provide strategic guidance on how structured data can be seamlessly transformed, optimized, and semantically enriched for advanced AI consumption and traditional BI/Analytics tools. Lead the effort to establish and maintain comprehensive documentation for all aspects of the semantic layer, which includes defining and standardizing key business metrics, documenting ontological definitions, relationships, usage guidelines, and metadata for all semantic models, ensuring clarity, consistency, and ease of understanding for all data users.
Evaluate and monitor the performance, quality, and usability of semantic systems, ensuring they meet organizational objectives, external standards, and the demands of AI applications. Act as a thought leader, constantly evaluating emerging trends in knowledge graphs, semantic AI, prompt engineering, and related technologies to strategically enhance DPR’s capabilities in knowledge representation and data understanding. Rapidly prototype high-priority solutions in cloud platforms, demonstrating their feasibility and business value. Participate in all phases of the project lifecycle and lead data architecture initiatives. Qualifications Proven expertise in data analysis, data modeling, and data engineering with a focus on cloud-native data platforms. 5+ years of hands-on experience in semantic modeling, ontology engineering, knowledge graphs, or related AI data preparation. 3+ years of experience developing data solutions specifically for AI/ML applications leveraging structured data. 3+ years of experience with data warehousing concepts, dimensional modeling, and data governance principles as they relate to structuring data for semantic enrichment. Proficiency in SQL and experience working with cloud-based data warehouses, preferably Snowflake. Strong analytical and problem-solving skills with keen attention to detail and the ability to translate complex business concepts into logical ontologies and knowledge graph structures. Strong proficiency in SQL, Python, and PySpark. Familiarity with agile methodologies, and experience working closely with cross-functional teams to manage technical backlogs. Skilled in orchestrating and automating data pipelines within a DevOps framework. Strong communicator with the ability to present ideas clearly and influence stakeholders — with a passion for enabling data-driven transformation.
DPR Construction is a forward-thinking, self-performing general contractor specializing in technically complex and sustainable projects for the advanced technology, life sciences, healthcare, higher education and commercial markets. Founded in 1990, DPR is a great story of entrepreneurial success as a private, employee-owned company that has grown into a multi-billion-dollar family of companies with offices around the world. Working at DPR, you'll have the chance to try new things, explore unique paths and shape your future. Here, we build opportunity together—by harnessing our talents, enabling curiosity and pursuing our collective ambition to make the best ideas happen. We are proud to be recognized as a great place to work by our talented teammates and leading news organizations like U.S. News and World Report, Forbes, Fast Company and Newsweek. Explore our open opportunities at www.dpr.com/careers.
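The DPR role above centers on knowledge representation: semantic models, ontologies, and knowledge graphs. As a toy illustration of the core idea (subject-predicate-object triples with a simple lookup; the entity and relation names are invented, not DPR's ontology):

```python
# A knowledge graph in miniature: facts stored as (subject, predicate, object)
# triples. Names below are hypothetical examples, not a real semantic layer.
triples = [
    ("Project:Alpha", "hasMetric", "Metric:CostVariance"),
    ("Metric:CostVariance", "definedAs", "actual_cost - budgeted_cost"),
    ("Project:Alpha", "locatedIn", "Region:Southeast"),
]

def objects_of(subject: str, predicate: str) -> list[str]:
    """Return every object linked from `subject` via `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("Project:Alpha", "hasMetric"))  # ['Metric:CostVariance']
```

Real semantic layers use standards such as RDF and SPARQL for the same triple pattern at scale, and attach the kind of metric definitions and metadata the posting asks the engineer to document.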

Posted 1 week ago

Designlibro, San Jose, California
Benefits:
- 401(k)
- 401(k) matching
- Dental insurance
- Health insurance
- Paid time off
- Vision insurance
- Wellness resources
Responsibilities
- Design robust and scalable data system architecture to collect, process, and store large volumes of data from IoT devices.
- Design and build data systems that support real-time, customer-facing data applications, ensuring data integrity and efficient retrieval processes.
- Design and build a data validation framework to guarantee the accuracy and consistency of data across all stages of the pipeline, ensuring reliable insights and decision-making.
- Implement data architectures to continuously improve AI models through automated feedback loops and data collection processes.
- Optimize data processing workflows for AIoT tasks.
- Collaborate with product owners to translate data requirements into actionable data engineering solutions.
- Collaborate with other engineers and product owners to solve challenging problems.
- Evangelize software engineering best practices and lead by example.
Qualifications
- Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of proven experience in data engineering, with a strong focus on data warehousing and data system architecture.
- Expertise in designing and building IoT data systems.
- Expertise in architecting systems that power high-volume consumer applications.
- Expertise in big data and streaming technologies (Python, Java, Kafka, Spark, Spark Streaming).
- Expertise in modern database technology (MongoDB, ElasticSearch, StarRocks, S3, Snowflake, BigQuery, etc.).
- Experience with data annotation tools and processes for machine learning datasets.
- Familiarity with MLOps practices and tools for managing AI model lifecycles.
- Knowledge of deploying systems into a production Cloud Native Environment (AWS or similar).
- Experience in Java Spring Framework is a plus
- Experience with video data handling and image processing is a plus
Compensation: $120,000.00 - $180,000.00 per year
Since 2019, Petlibro has designed products for the intertwined lives of pets and their people. From smart feeders with health insights to ultra-filtered automatic fountains, our award-winning products are engineered to magnify the bond between your pet and you. Designed better for your lives together.

Posted 1 week ago

OneMain Financial, Dallas, Texas
About Us: We are a forward-thinking financial services organization committed to delivering innovative digital solutions. Our data-driven approach powers our digital products, and we are scaling a shared data products organization to elevate our data platform capabilities and deliver actionable insights into digital behaviors and performance across the enterprise. Role Overview: We are seeking a seasoned Director of Data Products to lead and expand our shared data products organization within the digital product division. This strategic leadership role will have enterprise-wide impact, setting the long-term vision for the data platform, advancing analytics capabilities, and influencing digital strategy at the platform leadership level. You will build a team of data product managers and work cross-functionally with data engineering, product, design, and business leaders to deliver high-value data products and analytics solutions that drive business growth and optimize customer experiences. Key Responsibilities: Set Strategic Direction: Own the enterprise vision, strategy, and multi-year roadmap for the shared data platform and digital analytics products, ensuring alignment with company objectives and digital transformation goals. Executive Leadership & Team Development: Build, lead, and inspire a high-performing team of data product managers. Establish clear goals, provide coaching and career development, and foster a culture of collaboration, innovation, and excellence. Enterprise Data Platform Partnership: Partner with senior data engineering leadership to define and deliver a scalable, secure, and future-ready data platform that meets evolving business and regulatory requirements. Data Product Innovation: Lead the development of advanced analytics products that provide actionable visibility into customer behavior, product performance, and operational metrics across all digital channels. 
Data-Driven Culture: Champion the adoption of data-driven decision-making across the organization, enabling business units to leverage insights to optimize customer experience and product performance. Governance & Compliance: Establish and enforce enterprise-wide best practices for data governance, data quality, security, and compliance with financial services regulations. Stakeholder Engagement: Act as the senior point of contact for business leaders, technology executives, and external partners on shared data platform capabilities, priorities, and outcomes. Portfolio & Lifecycle Management: Oversee the full lifecycle of data products—from ideation and business case development through delivery, adoption, and continuous improvement—ensuring measurable business impact. Qualifications: 10+ years of experience in data product management, digital analytics, or related fields, with at least 5 years in a senior leadership role managing managers and large teams. Master’s or Bachelor’s degree in Business, Technology, or a related field. Demonstrated track record of delivering enterprise-scale data platform solutions and advanced analytics products, preferably in financial services or a regulated industry. Expertise in digital, customer journey, and cross-channel analytics. Deep knowledge of modern cloud-based data platforms (Snowflake, AWS, Azure, GCP), ETL pipelines, data lakes, and data warehouse architecture. Strong executive presence with the ability to influence senior leaders and drive strategic initiatives across complex organizations. Exceptional analytical, strategic thinking, and problem-solving skills, with the ability to translate insights into actionable business strategies. Experience in Agile product development environments. Thorough understanding of data governance frameworks, privacy regulations, and security best practices in financial services. 
Preferred Skills: Experience with leading digital analytics platforms (e.g., Heap, Google Analytics, Adobe Analytics) and BI tools (e.g., Tableau, Power BI). Master’s degree in business, data science, or a related field. Why Join Us: As a Director of Data Products at OneMain, you will play a pivotal role in shaping the data and analytics vision for the organization. You will lead initiatives that transform how we harness data to improve customer experiences, optimize operations, and deliver innovative digital products—impacting millions of customers nationwide. Location: This role is hybrid. You should be located within a commutable distance of one of our offices in Charlotte, NC, or Dallas/Fort Worth, with expectations to be in the office Tuesday, Wednesday and Thursday. Salary: The target base salary range for Baltimore, MD and New York, NY is $170k-210k, which is based on various factors including skills and work experience. In addition to base salary, this role is eligible for a competitive compensation program that is based on individual and company performance. Who We Are OneMain Financial (NYSE: OMF) is the leader in offering nonprime customers responsible access to credit and is dedicated to improving the financial well-being of hardworking Americans. Since 1912, we’ve looked beyond credit scores to help people get the money they need today and reach their goals for tomorrow. Our growing suite of personal loans, credit cards and other products help people borrow better and work toward a brighter future. Driven collaborators and innovators, our team thrives on transformative digital thinking, customer-first energy and flexible work arrangements that grow lives, careers and our company. At every level, we’re committed to an inclusive culture, career development and impacting the communities where we live and work. Getting people to a better place has made us a better company for over a century. There’s never been a better time to shine with OneMain.
Because team members at their best means OneMain at our best, we provide opportunities and benefits that make their health and careers a priority. That's why we've packed our comprehensive benefits package for full- and some part-timers with:

  • Health and wellbeing options for team members and their dependents
  • Up to 4% matching 401(k)
  • Employee Stock Purchase Plan (10% share discount)
  • Tuition reimbursement
  • Continuing education
  • Bonus eligibility
  • Paid time off (15 days' vacation per year, plus 2 personal days, prorated based on start date)
  • Paid holidays (7 days per year, based on start date)
  • Paid volunteer time (3 days per year, prorated based on start date)
  • And more

OneMain Holdings, Inc. is an Equal Employment Opportunity (EEO) employer. Qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship status, color, creed, culture, disability, ethnicity, gender, gender identity or expression, genetic information or history, marital status, military status, national origin, nationality, pregnancy, race, religion, sex, sexual orientation, socioeconomic status, transgender status, or any other basis protected by law.

Posted 2 days ago

Arine - San Francisco, CA
The Role:

As a key technical leader and team architect working in a fast-paced environment, you will drive the design, development, and optimization of scalable data ingestion pipelines within the Arine platform. Leveraging expert-level proficiency in Python and AWS, you will architect solutions that handle diverse file types and large-scale healthcare datasets. You will have a direct impact on building reusable, configurable toolsets that serve the data needs of the entire company.

What You'll Be Doing:

  • Act as the team architect by leading system design reviews, offering recommendations, conducting comprehensive peer reviews, and demonstrating expert-level proficiency in Python and AWS services
  • Architect and implement scalable data ingestion pipelines that bring diverse file types into the Arine platform
  • Develop reusable components that can be integrated into data pipelines to enhance efficiency and minimize future implementation time
  • Create configuration-driven, containerized toolsets that can be easily used and maintained by diverse engineering profiles
  • Work collaboratively with cross-functional teams to ensure their data requirements are met through ETL components
  • Implement incremental data ingestion strategies for large-scale healthcare datasets
  • Build monitoring and alerting systems for data ingestion processes and pipeline health
  • Apply software engineering best practices, including test-driven development and modular design, to data infrastructure
  • Refactor and rebuild existing data ingestion processes to improve scalability and operational efficiency
  • Work with containerization technologies (Docker, Kubernetes) to create portable and maintainable data solutions
  • Identify and escalate inefficiencies within and across teams
  • Provide technical guidance and mentorship to junior engineers, and promote best practices and coding standards
  • Author and support high-quality technical documentation, assisting junior engineers in doing the same

Who You Are and What You Bring:

  • 10+ years of professional experience in data engineering, with a focus on large-scale data ingestion and infrastructure
  • Deep expertise in Python programming and modern data engineering tools
  • Experience creating automated, production-grade ETL processes using Python and SQL
  • Strong understanding of ETL/ELT frameworks and distributed data processing
  • Experience with data processing, validation, cleaning, and debugging of datasets
  • Experience with API integration for seamless data exchange between systems
  • Proven experience handling and processing various file types and formats, including specialized healthcare standards such as HL7, 834, 837, and NCPDP
  • Experience integrating and consolidating data from diverse source systems into a unified repository, including data from EHR and claims systems as well as from file-based and API integrations
  • Experience processing large datasets (over 10 GB)
  • Experience with incremental data processing and change data capture (CDC) methodologies
  • Strong experience designing scalable data architectures in an AWS environment
  • Deep understanding of software engineering principles, including test-driven development, loose coupling, single responsibility, and modular design
  • Experience with containerization technologies (Docker, Kubernetes) and building configuration-driven, maintainable systems
  • Proven ability to build tools and systems that diverse engineering profiles can operate through configuration rather than code changes
  • Passion for building new, and improving existing, data infrastructure with robust, maintainable, and operationally excellent data systems
  • Familiarity with healthcare data and regulatory environments (HIPAA compliance) is a plus
  • Strong collaboration and communication skills; comfortable working with diverse technical and non-technical stakeholders
  • Excellent verbal and written communication skills, with the ability to explain technical infrastructure concepts to diverse audiences

Remote Work Requirements:

  • An established private work area that ensures information privacy
  • A stable high-speed internet connection
  • This role is remote, but you will be required to come to on-site meetings multiple times per year. This may include the interview process, onboarding, and team meetings.

Perks:

Joining Arine offers you a dynamic role and the opportunity to contribute to the company's growth and shape its future. You'll have unparalleled learning and growth prospects, collaborating closely with experienced Clinicians, Engineers, Software Architects, Data Scientists, and Digital Health Entrepreneurs.

The posted range represents the expected base salary for this position and does not include any other potential components of the compensation package, benefits, and perks. Ultimately, the final pay decision will consider factors such as your experience, job level, location, and other relevant job-related criteria. The base salary range for this position is $165,000-$180,000/year.
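The configuration-driven ingestion described in this posting can be pictured with a minimal sketch. Everything here is hypothetical and illustrative, not Arine's actual platform: the config keys (`source_format`, `rename`, `required`) and the `ingest` function are invented for this example; the idea shown is simply parsing and normalizing a payload based on declarative configuration rather than per-source code.

```python
import csv
import io
import json

def ingest(raw: str, config: dict) -> list[dict]:
    """Parse a raw payload into normalized records according to a config."""
    fmt = config["source_format"]
    if fmt == "csv":
        rows = list(csv.DictReader(io.StringIO(raw)))
    elif fmt == "jsonl":
        rows = [json.loads(line) for line in raw.splitlines() if line.strip()]
    else:
        raise ValueError(f"unsupported format: {fmt}")

    rename = config.get("rename", {})        # source field -> canonical field
    required = config.get("required", [])    # fields that must be non-empty
    records = []
    for row in rows:
        record = {rename.get(k, k): v for k, v in row.items()}
        missing = [f for f in required if not record.get(f)]
        if missing:
            raise ValueError(f"missing required fields: {missing}")
        records.append(record)
    return records

# A new source is onboarded by writing config, not code:
config = {"source_format": "csv",
          "rename": {"member_id": "patient_id"},
          "required": ["patient_id"]}
records = ingest("member_id,drug\nM1,aspirin\n", config)
```

The payoff of this style is that "diverse engineering profiles" can onboard a new feed by supplying a config file, while parsing, renaming, and validation logic stays in one tested place.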

Posted 30+ days ago


Data Scientist / Senior Data Scientist, Statistical Genetics

California Life Company - South San Francisco, CA


Job Description

Who We Are:

Calico (Calico Life Sciences LLC) is an Alphabet-founded research and development company whose mission is to harness advanced technologies and model systems to increase our understanding of the biology that controls human aging. Calico will use that knowledge to devise interventions that enable people to lead longer and healthier lives. Calico's highly innovative technology labs, its commitment to curiosity-driven discovery science and, with academic and industry partners, its vibrant drug-development pipeline, together create an inspiring and exciting place to catalyze and enable medical breakthroughs.

Position Description:

Calico is seeking a Data Scientist/Senior Data Scientist to join the statistical genetics team within the Computational Sciences group. In this position, you will develop and apply cutting-edge computational methods to analyze unique biobank-scale datasets (e.g. UK Biobank) to identify potential drug targets for age-related disease. Two major areas of focus will be analysis of longitudinal datasets to identify factors modulating the trajectory of age-related decline and the analysis of high-dimensional phenotypes. The successful candidate will join a vibrant research community and work closely with internal and external scientific collaborators and will be expected to contribute to the design of target discovery or validation efforts.

Position Responsibilities:

  • Develop and apply computational methods suitable for biobank-scale complex or high-dimensional phenotypic datasets from both public and proprietary data sources
  • Conceive, design, and execute studies to interrogate the genetic basis of age-related complex traits and of aging trajectories in large human cohorts
  • Integrate multiple data sources (e.g. clinical data, genetics, 'omics) to develop therapeutic hypotheses for age-related disease
  • Contribute to software and/or workflows for the analysis of cohort data across multiple research projects and development programs
  • Collaborate with and communicate findings effectively to researchers from a broad range of scientific backgrounds, both internally and externally

Position Requirements:

  • Ph.D. in genetics, statistics, statistical genetics, computational biology, or equivalent
  • Track record of developing and applying new computational and statistical methods tailored to analyzing novel datasets
  • Experience with the statistical genetics toolkit for complex traits (e.g. GWAS, gene burden tests, statistical fine-mapping, LD score regression, eQTL/pQTL mapping, colocalization, polygenic risk scores, Mendelian randomization), including methods for ancestrally diverse populations
  • Experience with analyzing large, high-dimensional clinical and/or molecular datasets (for example, imaging, genomics and other 'omics, longitudinal data)
  • Familiarity with large human cohort studies (e.g. UK Biobank, FinnGen, All of Us)
  • Strong coding skills in Python and/or R, including experience developing software and/or workflows that can be readily used by others
  • Strong interpersonal, written, and verbal communication skills, including collaborating with stakeholders from different scientific disciplines
  • Must be able to work onsite at least 4 days a week
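As a toy illustration of the GWAS item in the toolkit above: a single-variant additive association test is, at its core, an ordinary least-squares regression of a quantitative phenotype on genotype dosage (0/1/2 copies of the alternate allele). Real biobank-scale analyses use dedicated tools such as PLINK or REGENIE with covariates and relatedness adjustment; this self-contained sketch shows only the core model, and the example data are invented.

```python
import math

def assoc_test(genotypes: list[float], phenotypes: list[float]):
    """OLS regression of phenotype on genotype; returns (beta, se, t)."""
    n = len(genotypes)
    mean_g = sum(genotypes) / n
    mean_p = sum(phenotypes) / n
    sxx = sum((g - mean_g) ** 2 for g in genotypes)
    sxy = sum((g - mean_g) * (p - mean_p)
              for g, p in zip(genotypes, phenotypes))
    beta = sxy / sxx                      # per-allele effect estimate
    intercept = mean_p - beta * mean_g
    rss = sum((p - (intercept + beta * g)) ** 2
              for g, p in zip(genotypes, phenotypes))
    se = math.sqrt(rss / (n - 2) / sxx)   # standard error of beta
    return beta, se, beta / se            # t-statistic for H0: beta = 0

# Invented toy cohort: six individuals, phenotype rises with dosage
geno = [0, 0, 1, 1, 2, 2]
pheno = [1.0, 1.2, 2.1, 1.9, 3.0, 3.2]
beta, se, t = assoc_test(geno, pheno)
```

In practice this is run per variant across millions of sites, with ancestry principal components and other covariates in the model, and the resulting test statistics feed downstream steps such as fine-mapping and LD score regression.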

The estimated base salary range for this role is $120,000 - $185,000. Actual pay will be based on a number of factors including experience and qualifications. This position is also eligible for two annual cash bonuses.
