
Auto-apply to these data entry jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

Data Society · Dallas, TX
We Are: At Data Society Group, we provide the highest quality, leading-edge, industry-tailored data and AI training and solutions for Fortune 1,000 companies and federal, state, and local governmental organizations. We partner with our clients to educate, equip, and empower their workforce with the skills they need to achieve their goals and expand their impact. Data Society Group publishes CDO Magazine, the preeminent global publication for Data Officers. Our executive boards include industry leaders, engineers, and data scientists from across the world. We are empowering the workforce of the future, from data literacy for all employees to support for data engineers and data scientists to train up on the most complex AI solutions and Machine Learning skills.

About the Role: Are you passionate about teaching and sharing your real-world experience in data science and AI? Join Data Society as a part-time Data Science and Data Literacy Instructor and help professionals around the world gain the skills they need to transform their careers and industries. As an independent contractor, you'll enjoy the flexibility to teach subjects you're passionate about, on a schedule that fits your life, while being supported by a collaborative and communicative team. We maintain an active pool of contract instructors year-round, offering regular teaching opportunities. You'll lead engaging, interactive sessions for professionals across industries and skill levels, helping them understand how to apply data science, machine learning, and AI in practical, impactful ways. Your role is not just to instruct, but to inspire: creating a positive and supportive learning environment where learners leave empowered and ready to act. We are specifically recruiting for a client engagement on-site in Dallas, TX. If you're within commutable distance of the Dallas area, we'd love to speak with you!

Key Responsibilities: Deliver high-quality, engaging instruction to professional audiences through live online or in-person sessions. Prepare thoroughly for each course by reviewing and internalizing curriculum content. Use interactive, hands-on teaching methods to support different learning styles. Collaborate with instructional designers and fellow data scientists to continuously improve course content. Communicate student needs, feedback, and insights to internal teams to enhance the learning experience. Engage with students and client stakeholders to identify potential additional learning needs.

What You Bring: You're not only a subject matter expert; you're a passionate educator. You love simplifying complex topics and helping others build confidence in their abilities. You're a great communicator, comfortable teaching adult learners in professional settings, including federal agencies and Fortune 500 companies.

Minimum Qualifications: 2+ years of professional experience in a data-focused role. Proficiency in at least one of the following areas: R, Python, or SQL; Data Governance; Artificial Intelligence / Machine Learning; Cloud Infrastructure (Azure, AWS). Experience teaching or tutoring adult learners (online and/or in-person). Strong presentation and facilitation skills; confident teaching professionals and executives.

Posted 1 day ago

Shuvel Digital · Los Angeles, CA
Job Description: The Los Angeles Unified School District seeks qualified proposers to provide Data Science Services on behalf of Information Technology Services.

Responsibilities: Set up RStudio Workbench, Connect, and Package Manager on a Linux server. Manage libraries and versions. Write the required security plan. Work with security to connect the server to Snowflake and other data sources. Provision server access to data scientists, including RStudio Server, SFTP, and the command line. Provide training and support to data scientists, including training on visualization packages such as ggplot2, Quarto, and Shiny. Guide data scientists on how to use version control with Git/Azure DevOps.

The resource must have at least three (3) years of experience providing the services listed.

Posted 30+ days ago

California Life Company · South San Francisco, CA
Who We Are: Calico (Calico Life Sciences LLC) is an Alphabet-founded research and development company whose mission is to harness advanced technologies and model systems to increase our understanding of the biology that controls human aging. Calico will use that knowledge to devise interventions that enable people to lead longer and healthier lives. Calico's highly innovative technology labs, its commitment to curiosity-driven discovery science and, with academic and industry partners, its vibrant drug-development pipeline, together create an inspiring and exciting place to catalyze and enable medical breakthroughs. Position Description: Calico is seeking a Data Scientist/Senior Data Scientist to join the statistical genetics team within the Computational Sciences group. In this position, you will develop and apply cutting-edge computational methods to analyze unique biobank-scale datasets (e.g. UK Biobank) to identify potential drug targets for age-related disease. Two major areas of focus will be analysis of longitudinal datasets to identify factors modulating the trajectory of age-related decline and the analysis of high-dimensional phenotypes. The successful candidate will join a vibrant research community and work closely with internal and external scientific collaborators and will be expected to contribute to the design of target discovery or validation efforts. Position Responsibilities: Develop and apply computational methods suitable for biobank-scale complex or high-dimensional phenotypic datasets from both public and proprietary data sources Conceive, design, and execute studies to interrogate the genetic basis of age-related complex traits and of aging trajectories in large human cohorts Integrate multiple data sources (e.g. clinical data, genetics, 'omics) to develop therapeutic hypotheses for age-related disease Contribute to software and/or workflows for the analysis of cohort data across multiple research projects and development programs Collaborate with and communicate findings effectively to researchers from a broad range of scientific backgrounds, both internally and externally Position Requirements: Ph.D. in genetics, statistics, statistical genetics, computational biology, or equivalent Track record of developing and applying new computational and statistical methods tailored to analyzing novel datasets Experience with the statistical genetic toolkit for complex traits (e.g. GWAS, gene burden tests, statistical finemapping, LD score regression, eQTL/pQTL mapping, colocalization, polygenic risk scores, Mendelian randomization), including methods for ancestrally diverse populations Experience with analyzing large, high-dimensional clinical and/or molecular datasets (for example, imaging, genomics and other 'omics, longitudinal data) Familiarity with large human cohort studies (e.g. UK Biobank, FinnGen, All of Us) Strong coding skills in Python and/or R, including experience developing software and/or workflows that can be readily used by others Strong interpersonal, written, and verbal communication skills, including collaborating with stakeholders from different scientific disciplines Must be able to work onsite at least 4 days a week The estimated base salary range for this role is $120,000 - $185,000. Actual pay will be based on a number of factors including experience and qualifications. This position is also eligible for two annual cash bonuses.
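For illustration only (not part of the posting): a minimal Python sketch of the single-variant association test that GWAS work of the kind described above repeats across the genome, using statsmodels. The file name, column names, and covariate coding are hypothetical; this is not Calico's code.

# Minimal covariate-adjusted association test of the kind a GWAS runs per variant.
# Illustrative only; phenotype.csv and its columns are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("phenotype.csv")                    # columns: trait, age, sex (0/1), dosage (0-2 allele count)
X = sm.add_constant(df[["dosage", "age", "sex"]])    # additive genetic model with covariates
model = sm.OLS(df["trait"], X).fit()
beta, pval = model.params["dosage"], model.pvalues["dosage"]
print(f"effect size {beta:.4f}, p-value {pval:.2e}")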

Posted 2 days ago

Parts Town · Addison, IL
Position at Parts Town See What We're All About As the fastest-growing distributor of restaurant equipment, HVAC and residential appliance parts, we like to do things a little differently. First, you need to understand and demonstrate our Core Values with safety being your first priority. That's key. But we're also looking for unique enthusiasm, high integrity, courage to embrace change…and if you know a few jokes, that puts you on the top of our list! Do you have a genius-level knowledge of original equipment manufacturer parts? If not, no problem! We're more interested in passionate people with fresh ideas from different backgrounds. That's what keeps us at the top of our game. We're proud that our workplace has been recognized for its growth and innovation on the Inc. 5000 list 15 years in a row and the Crain's Fast 50 list ten times. We are honored to be voted by our Chicagoland team as a Chicago Tribune Top Workplace for the last four years. If you're ready to roll up your sleeves, go above and beyond and put your ambition to work, all while having some fun, let's chat - Apply Today! Perks Parts Town Pride - check out our virtual tour and culture! Quarterly profit-sharing bonus Hybrid work schedule Team member appreciation events and recognition programs Volunteer opportunities Monthly IT stipend Casual dress code On-demand pay options: Access your pay as you earn it, to cover unexpected or even everyday expenses All the traditional benefits like health insurance, 401k/401k match, employee assistance programs and time away - don't worry, we've got you covered. The Job at a Glance The Data Engineer- Data Products & Delivery will specialize in turning raw data into business-ready products within GCP. They will design and optimize data models, marts, and semantic layers in BigQuery, enabling analytics, BI, and ML use cases across the enterprise. You will also support downstream systems and APIs that deliver trusted data for operational and AI-driven processes. You will play a foundational role in shaping this future - building the pipelines, products, and platforms that power the next generation of digital distribution. A Typical Day Build silver/gold layers in BigQuery, transforming raw data into clean, business-ready models. Design semantic layers using Looker or dbt for consistent business metrics Develop data marts and star schemas optimized for analytics and self-service BI Build APIs and services for data delivery (Cloud Functions, Cloud Run) Partner with analysts, data scientists, and ML engineers to ensure data & AI readiness and support advanced modeling Collaborate with the Data Governance team to embed stewardship, lineage, and metadata into Dataplex and other MDM tooling Support real-time analytics using BigQuery streaming and Pub/Sub as needed. 
Optimize query performance and cost efficiency in BigQuery. Drive adoption of AI/automation by ensuring data models are accessible for predictive and agentic use cases.

To Land This Opportunity: You have 4+ years of experience in data engineering or BI-focused data modeling. You have hands-on expertise in BigQuery (partitioning, clustering, performance tuning, cost management). You have strong knowledge of dbt, Looker, and SQL for transformation and semantic modeling. You have experience with Cloud Functions, Cloud Run, and APIs for data delivery. You're familiar with Pub/Sub and BigQuery streaming for real-time use cases. Exposure to ML feature engineering in Vertex AI or similar platforms is a plus. You have a strong understanding of data governance frameworks, Dataplex, and metadata management. You're an all-star communicator and are proficient in English (both written and verbal). You have a quality, high-speed internet connection at home.

About Your Future Team: Our IT team's favorite pastimes include corny jokes, bowling, pool, and good pizza. They like vehicles that go really fast, Harry Potter, and coffee… a lot (they'll hear you out on whether Dunkin or Starbucks gets your vote). At Parts Town, we value transparency and are committed to ensuring our team members feel appreciated and supported. We prioritize our positive workplace culture where collaboration, growth, and work-life balance are celebrated.

The salary range for this role is $114,300 - $132,715, based on factors including but not limited to qualifications, experience, and geographical location. Parts Town is a pay-for-performance company. In addition to base pay, some roles offer a profit-sharing program and an annual bonus depending on the role. Our comprehensive benefits package includes health, dental and vision insurance, 401(k) with match, employee assistance programs, paid time off, paid sick time off, paid holidays, paid parental leave, and professional development opportunities. Parts Town welcomes diversity, and as an equal opportunity employer all qualified applicants will be considered regardless of race, religion, color, national origin, sex, age, sexual orientation, gender identity, disability or protected veteran status.
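For illustration only (not part of the posting): a minimal Python sketch of building a "gold"-layer aggregate table in BigQuery with the official google-cloud-bigquery client, of the kind this role describes. The project, dataset, and table names are hypothetical; this is not Parts Town's pipeline.

# Build a business-ready gold table from a silver table in BigQuery.
# Illustrative only; project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # uses application default credentials
sql = """
CREATE OR REPLACE TABLE analytics_gold.daily_order_totals AS
SELECT order_date,
       SUM(order_amount) AS total_amount,
       COUNT(*)          AS order_count
FROM   analytics_silver.orders
GROUP  BY order_date
"""
client.query(sql).result()   # blocks until the job finishes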

Posted 2 days ago

Johnson & Johnson · Cambridge, MA
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com Job Function: Data Analytics & Computational Sciences Job Sub Function: Data Science Job Category: Scientific/Technology All Job Posting Locations: Cambridge, Massachusetts, United States of America, Spring House, Pennsylvania, United States of America, Titusville, New Jersey, United States of America Job Description: Johnson & Johnson Innovative Medicine is recruiting for a Knowledge Graph Engineer, R&D Data Science & Digital Health- Data Strategy and Products. The primary location is Barcelona or Madrid, Spain but is also open to Titusville, NJ; Spring House, PA; or Cambridge, MA. Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Engineers like you work on teams that save lives by developing the medicines of tomorrow. Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way. Learn more at https://www.jnj.com/innovative-medicine Job Responsibilities We are committed to using innovative technology to improve healthcare outcomes worldwide. As part of this mission, we are seeking a Knowledge Graph Engineer to join our Data Strategy and Products team to standardize and connect biomedical and clinical data. You will be a hands-on technical contributor with depth in semantic technologies, ontology, and graph data modeling, plus strong familiarity with the life sciences domain. You will connect enterprise master data with R&D data across the entire product lifecycle so trusted, interoperable knowledge powers analytics, search, and AI across Johnson and Johnson Innovative Medicine. Contribute to the design and implementation of a scalable knowledge graph infrastructure focused on data standardization and interoperability. Curate and extend ontologies for clear mapping into established biomedical ontologies and controlled terminologies using RDF standards. Apply graph-based data modeling for efficient organization, integration and retrieval to ensure system flexibility and long-term maintainability. Stand up SPARQL/GraphQL/REST services; develop ingestion and curation pipelines to ingest, normalize and map concepts across data sources. Extend and curate ontologies (e.g., diseases, drugs, targets, pathways, etc.) and maintain synonyms, cross-references, and provenance. Partner with cross-functional teams to enable NLP/RAG over graphs, features for predictive modeling and terminology services for search and study design tools. Work with IT and DevOps teams to deploy and manage the graph database infrastructure, focusing on high availability, scalability, and recovery operations. Create and be responsible for documentation, such as data dictionaries, data lineage, and data flow diagrams, to facilitate understanding of the knowledge graph. Job Qualifications Desired Ph.D. 
or master's degree in bioengineering, computer science, IT, bioinformatics, physics, mathematics, or related fields, with emphasis on semantic technologies and biomedical applications. At least 5 years of professional experience in health informatics, or at least 7 years of professional experience, with additional consideration for candidates with graduate degrees or equivalent experience. Programming background in parser combinators, natural language processing, and linked data (RDF triple stores and property graphs). Demonstrated experience in large-scale knowledge graph construction, ontology development, and data integration in pharmaceutical or healthcare domains. Proficiency in semantic web technologies (SPARQL, RDF, OWL); familiarity with graph databases (Neo4j, Amazon Neptune). Proven work with complex biomedical datasets, including genomics, proteomics, and high-throughput screening data. A track record in a pharmaceutical, biotech, or related research environment is preferred. Proficiency in various data storage solutions (SQL, key-value, column, document, graph stores) and data modeling techniques (semantic data, ontologies, taxonomies). Experience with CI/CD implementations, Git usage, CI/CD stacks (Jenkins, GitLab, Azure DevOps), DevOps tools, metrics/monitoring, and containerization technologies (Docker, Singularity). Strong skills in analysis, problem-solving, organizational change, project delivery, and managing external vendors. Demonstrated agile decision-making, performance management, continuous learning, and commitment to quality. Ability to multi-task, prioritize work, exhibit organizational skills and flexibility to deliver maximum business value. Capacity to translate discussions into user requirements and project plans. Willingness to travel less than 25% to conferences and internal meetings. #JRDDS

The anticipated base pay range for this position is $146,200 - $197,800 USD.

Additional Description for Pay Transparency: Subject to the terms of their respective plans, employees and/or eligible dependents are eligible to participate in the following Company sponsored employee benefit programs: medical, dental, vision, life insurance, short- and long-term disability, business accident insurance, and group legal insurance. Subject to the terms of their respective plans, employees are eligible to participate in the Company's consolidated retirement plan (pension) and savings plan (401(k)). This position is eligible to participate in the Company's long-term incentive program. Subject to the terms of their respective policies and date of hire, employees are eligible for the following time off benefits: Vacation- 120 hours per calendar year. Sick time- 40 hours per calendar year; for employees who reside in the State of Washington- 56 hours per calendar year. Holiday pay, including Floating Holidays- 13 days per calendar year. Work, Personal and Family Time- up to 40 hours per calendar year. Parental Leave- 480 hours within one year of the birth/adoption/foster care of a child. Condolence Leave- 30 days for an immediate family member; 5 days for an extended family member. Caregiver Leave- 10 days. Volunteer Leave- 4 days. Military Spouse Time-Off- 80 hours. Additional information can be found through the link below. https://www.careers.jnj.com/employee-benefits
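For illustration only (not part of the posting): a minimal Python sketch of the kind of RDF modeling and SPARQL querying a knowledge graph engineer works with, using the rdflib library. The namespace, classes, and triples are hypothetical toy data, not Johnson & Johnson's ontology.

# Build a tiny biomedical graph and query it with SPARQL via rdflib.
# Illustrative only; the ex: namespace and terms are hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/biomed/")
g = Graph()
g.bind("ex", EX)

g.add((EX.TNF, RDF.type, EX.GeneTarget))
g.add((EX.TNF, RDFS.label, Literal("TNF")))
g.add((EX.adalimumab, RDF.type, EX.Drug))
g.add((EX.adalimumab, EX.inhibits, EX.TNF))

# Which drugs inhibit a gene target?
results = g.query("""
    SELECT ?drug ?target WHERE {
        ?drug a ex:Drug ;
              ex:inhibits ?target .
        ?target a ex:GeneTarget .
    }""", initNs={"ex": EX})
for drug, target in results:
    print(drug, "inhibits", target)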

Posted 3 days ago

Johnson & Johnson · Cambridge, MA
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com Job Function: Data Analytics & Computational Sciences Job Sub Function: Data Science Job Category: People Leader All Job Posting Locations: Cambridge, Massachusetts, United States of America, Titusville, New Jersey, United States of America Job Description: Johnson and Johnson Innovative Medicine (J&J IM), a pharmaceutical company of Johnson & Johnson is recruiting for a Director, R&D Data Science, Data Products- Global Development. This position has a primary location of Titusville, NJ but is also open to Spring House, PA or Cambridge, MA. This position requires up to 25% travel. About Innovative Medicine Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow. Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way. Learn more at https://www.jnj.com/innovative-medicine Position Summary The Director, Data Products- Global Development is responsible for leading the strategy, design, and delivery of user-centric, reusable data products that enable advanced analytics, data science, and AI/ML solutions across the Global Development organization. This role ensures that high-value internal and external development data assets-including protocol, study design, operational, regulatory, disclosure, and real-world data (RWD/RWE)-are standardized, productized, and integrated into Janssen's enterprise R&D data ecosystem. Working closely with Global Development Data Science teams, Clinical Development, Operations, and Regulatory stakeholders, as well as the broader Data Strategy & Products organization, this leader ensures that Global Development data products are AI-ready, semantically consistent, and interoperable, enabling trial efficiency, regulatory readiness, and improved compliance outcomes. Key Responsibilities: Data Product Strategy & Execution Lead a team to define and deliver data products addressing critical Global Development use cases (e.g., study startup, TMF metadata, operational KPIs, submission readiness). Define and maintain a roadmap for Global Development data products, spanning protocol development through clinical trial disclosure. Develop data products through agile delivery and FAIR data principles, ensuring scalability, interoperability, and reuse across Janssen functions and external partners. Integrate internal and external Global Development data sources, including CRO-provided datasets, performance benchmarks, and regulatory datasets. Ontology & Semantic Modeling Contribute to the development and governance of a Global Development Ontology to enable semantic consistency across operational and regulatory domains. Align Global Development Ontology with enterprise metadata standards and external frameworks (e.g., CDISC, HL7 FHIR,etc). 
Collaboration & Integration Partner with Knowledge Management, Data Product Architecture & Governance, and Master Data Management teams to ensure Global Development data products integrate seamlessly into enterprise ontologies, knowledge graphs, and catalogs. Collaborate with Global Development stakeholders-including Clinical, Regulatory, Medical Writing, QA, and Safety-to co-create solutions that maximize the value of development data assets. Interface with regulatory and compliance teams to ensure data products meet submission, disclosure, and transparency requirements. Team & Operating Model Leadership Lead a cross-functional team of product owners, data engineers, and Global Development domain experts. Establish governance models, agile delivery processes, and value-tracking metrics for Global Development data products. Value Realization & Communication Define and track KPIs to measure the impact of data products on study startup times, operational insights, and regulatory compliance. Communicate product vision, roadmap, and value realization to Global Development leadership, Data Science teams, and executive stakeholders. Champion data literacy and adoption within the Global Development organization. Strategic Impact The Director, Data Products- Global Development ensures Global Development data is transformed into trusted, AI-ready products that accelerate study startup, improve operational insights, and strengthen regulatory readiness. By embedding semantic rigor and aligning with enterprise data strategy, this role positions Global Development data as a strategic asset that enables trial efficiency, compliance, and innovation across Janssen R&D. Qualifications Education PhD or Master's in informatics, computer science, life sciences, or related discipline. 8+ years of experience in pharma/biotech R&D with focus on clinical development, operations, regulatory, or data product management. Strong track record in data product development, integration, or semantic modeling in a regulated domain. Experience working with multi-modal development data, including clinical trial, protocol, operational, submission, and disclosure datasets. Skills & Expertise Deep knowledge of Global Development processes, clinical trial design, and regulatory requirements. Deep knowledge of data products, database design, data transformation/mapping. Familiarity with metadata management, ontologies, knowledge graphs, and industry standards (e.g., CDISC, FHIR, ICH). Strong leadership, collaboration, and communication skills, with ability to translate technical strategy into business/science value. Demonstrated ability to influence stakeholders and drive adoption of new data capabilities across a complex organization. Johnson & Johnson is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state or local law. We actively seek qualified candidates who are protected veterans and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act. Johnson & Johnson is committed to providing an interview process that is inclusive of our applicants' needs. 
If you are an individual with a disability and would like to request an accommodation, external applicants please contact us via https://www.jnj.com/contact-us/careers ; internal employees contact AskGS to be directed to your accommodation resource. #LI-Hybrid #JRDDS #JNJDataScience #JNJIMRND-DS

The anticipated base pay range for this position is $161,000 to $276,000 USD.

Additional Description for Pay Transparency: Subject to the terms of their respective plans, employees and/or eligible dependents are eligible to participate in the following Company sponsored employee benefit programs: medical, dental, vision, life insurance, short- and long-term disability, business accident insurance, and group legal insurance. Subject to the terms of their respective plans, employees are eligible to participate in the Company's consolidated retirement plan (pension) and savings plan (401(k)). This position is eligible to participate in the Company's long-term incentive program. Subject to the terms of their respective policies and date of hire, employees are eligible for the following time off benefits: Vacation- 120 hours per calendar year. Sick time- 40 hours per calendar year; for employees who reside in the State of Washington- 56 hours per calendar year. Holiday pay, including Floating Holidays- 13 days per calendar year. Work, Personal and Family Time- up to 40 hours per calendar year. Parental Leave- 480 hours within one year of the birth/adoption/foster care of a child. Condolence Leave- 30 days for an immediate family member; 5 days for an extended family member. Caregiver Leave- 10 days. Volunteer Leave- 4 days. Military Spouse Time-Off- 80 hours. Additional information can be found through the link below. https://www.careers.jnj.com/employee-benefits

Posted 3 days ago

New York Times Company · New York, NY
The mission of The New York Times is to seek the truth and help people understand the world. That means independent journalism is at the heart of all we do as a company. It’s why we have a world-renowned newsroom that sends journalists to report on the ground from nearly 160 countries. It’s why we focus deeply on how our readers will experience our journalism, from print to audio to a world-class digital and app destination. And it’s why our business strategy centers on making journalism so good that it’s worth paying for.  About the Role The New York Times is looking for a senior data engineer to join the Customer-Facing Data Products team to develop real-time data pipelines and APIs that process events and serve aggregated data for customer-facing use cases. You will report to the Engineering Manager for the Customer-Facing Data Products team and build widely reusable solutions to help partner teams solve our most important real-time needs, including behavioral and targeting use cases. This is a hybrid role based in our New York City headquarters. Responsibilities Develop real-time data pipelines using Apache Kafka, Apache Flink, and other streaming technologies. Ingest and organize structured and unstructured data for widespread reuse across patterns. Implement mechanisms to ensure data quality, observability and governance best practices. Collaborate with software engineers and infrastructure teams to improve pipeline performance and integrate solutions into production environments. Stay current with latest technologies, keeping up with the latest advancements in streaming data processing and related technologies. Grow the skills of colleagues by providing clear technical feedback through pairing, design, and code review. Experience collaborating with product and partners to meet shared goals. Demonstrate support and understanding of our value of journalistic independence and a strong commitment to our mission to seek the truth and help people understand the world. Basic Qualifications: 5+ years of full-time data engineering experience shipping real-time solutions with event-driven architectures and stream-processing frameworks. Experience with AWS and their service offerings and tools. Understanding of modern API design principles and technologies, including REST, GraphQL, and gRPC for data serving. Programming fluency with Python. Experience using version control and CI/CD tools, such as Github and Drone. Preferred Qualifications: Experience developing pipelines with Apache Kafka, Apache Flink, or Spark Streaming. Experience with SQL and building APIs with GoLang and Protobuf. Understanding of cloud-native data platform technologies including data lakehouse and medallion architectures. REQ-018499 The annual base pay range for this role is between: $140,000 — $155,000 USD   The New York Times Company is committed to being the world’s best source of independent, reliable and quality journalism. To do so, we embrace a diverse workforce that has a broad range of backgrounds and experiences across our ranks, at all levels of the organization. We encourage people from all  backgrounds to apply. 
We are  an Equal Opportunity Employer and do not discriminate on the basis of an individual's sex, age, race, color, creed, national origin, alienage, religion, marital status, pregnancy, sexual orientation or affectional preference, gender identity and expression, disability, genetic trait or predisposition, carrier status, citizenship, veteran or military status and other personal characteristics protected by law. All applications will receive consideration for employment without regard to legally protected characteristics.  The U.S. Equal Employment Opportunity Commission (EEOC)’s Know Your Rights Poster is available here .  The New York Times Company will provide reasonable accommodations as required by applicable federal, state, and/or local laws. Individuals seeking an accommodation for the application or interview process should email reasonable.accommodations@nytimes.com. Emails sent for unrelated issues, such as following up on an application, will not receive a response. The Company will further consider qualified applicants, including those with criminal histories, in a manner consistent with the requirements of applicable "Fair Chance" laws.  For information about The New York Times' privacy practices for job applicants click  here . Please beware of fraudulent job postings. Scammers may post fraudulent job opportunities, and they may even make fraudulent employment offers. This is done by bad actors to collect personal information and money from victims. All legitimate job opportunities from The New York Times will be accessible through The New York Times careers site . The New York Times will not ask job applicants for financial information or for payment, and will not refer you to a third party to do so. You should never send money to anyone who suggests they can provide employment with The New York Times. If you see a fake or fraudulent job posting, or if you suspect you have received a fraudulent offer, you can report it to The New York Times at NYTapplicants@nytimes.com. You can also file a report with the Federal Trade Commission or your state attorney general .  
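For illustration only (not part of the posting): a minimal Python sketch of the kind of streaming consumption and running aggregation the Customer-Facing Data Products role above describes, using the kafka-python client. The broker address, topic name, and event fields are hypothetical; this is not The New York Times' pipeline.

# Consume JSON events from a Kafka topic and keep a running aggregate.
# Illustrative only; broker, topic, and event schema are hypothetical.
import json
from collections import Counter
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "page-view-events",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

views_by_section = Counter()
for message in consumer:                    # blocks, yielding events as they arrive
    event = message.value
    views_by_section[event["section"]] += 1
    if sum(views_by_section.values()) % 1000 == 0:
        print(dict(views_by_section))       # periodically emit the running aggregate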

Posted 30+ days ago

C3 AI · Tysons, VA
C3 AI (NYSE: AI), is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products including the C3 Agentic AI Platform, an end-to-end platform for developing, deploying, and operating enterprise AI applications, C3 AI applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally, and C3 Generative AI, a suite of domain-specific generative AI offerings for the enterprise. Learn more at: C3 AI As a Data Scientist / Senior Data Scientist – Federal Optimization, you will partner with some of the largest and most mission-critical organizations in the world to design and deliver the next generation of AI-powered enterprise applications. Our team focuses on developing scalable, explainable optimization models and algorithms tailored to federal use cases across domains such as logistics, operations, resource planning, and more. You’ll work cross-functionally with data scientists, engineers, subject matter experts, and federal stakeholders to deliver full-lifecycle solutions: from translating client input into soft and hard constraints, to deploying robust, production-grade optimization tools on the C3 AI Suite. Qualified candidates should possess deep expertise in operations research and optimization. This role requires US Citizenship. Responsibilities: Research, design, implement, and deploy optimization solutions for enterprise applications leveraging the C3 AI Suite. Transform client requirements into mathematical formulations. Partner with cross-functional teams to translate optimization model insights into actionable strategies and measurable outcomes. Assist and enable federal customers to build their own optimization applications on the C3 AI Suite. Develop, maintain, and enhance optimization frameworks, libraries, and tools to ensure scalability and efficiency while contributing to the continuous improvement of the C3 AI Suite. Stay informed on state-of-the-art optimization techniques, promote best practices, and foster an innovative and collaborative work environment at C3 AI. Qualifications: U.S. Citizenship (and willingness to obtain a security clearance). Bachelor’s in computer science, Electrical Engineering, Statistics, Operations Research or equivalent fields. Strong foundation in optimization techniques (e.g., LP, MILP, MINLP) and solvers. Strong mathematical foundation (linear algebra, calculus, statistics, probability). Proficiency in Python and experience with mathematical programming libraries. Excellent communication skills and ability to work independently or in teams. Motivated, curious, and eager to learn about federal mission domains. Preferred Qualifications: MS or PhD in Operations Research, Applied Mathematics, Computer Science, Industrial Engineering, or a related field. Active TS/SCI with CI or Full-Scope Polygraph. Professional experience applying optimization in federal or customer-facing environments. Familiarity with commercial solvers (e.g., Gurobi), Git, and GenAI tools. Understanding of machine learning, deep learning, or reinforcement learning. Portfolio of relevant projects or publications. C3 AI provides a competitive compensation package and excellent benefits. Candidates must be authorized to work in the United States without the need for current or future company sponsorship. C3 AI is proud to be an Equal Opportunity and Affirmative Action Employer. 
We do not discriminate on the basis of any legally protected characteristics, including disabled and veteran status. 
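For illustration only (not part of the posting): a toy linear program of the kind of resource-allocation formulation the optimization role above describes, solved with scipy.optimize.linprog. The routes, capacities, and costs are made up; this is not a C3 AI model.

# Minimize shipping cost subject to capacity and demand constraints.
# Illustrative only; all numbers are hypothetical.
from scipy.optimize import linprog

# decision variables: tons shipped on routes r1, r2, r3
cost = [4.0, 6.0, 9.0]            # objective: minimize total shipping cost
A_ub = [[1, 1, 0],                # r1 and r2 share a depot with 80-ton capacity
        [0, 0, 1]]                # r3 is capped at 40 tons
b_ub = [80, 40]
A_eq = [[1, 1, 1]]                # total demand of 100 tons must be met
b_eq = [100]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, res.fun)             # optimal shipment plan and its total cost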

Posted 30+ days ago

C3 AI · New York City, NY
C3 AI (NYSE: AI), is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products including the C3 Agentic AI Platform, an end-to-end platform for developing, deploying, and operating enterprise AI applications, C3 AI applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally, and C3 Generative AI, a suite of domain-specific generative AI offerings for the enterprise. Learn more at: C3 AI As a member of the C3 AI Data Science team, you will work with some of the largest companies on the planet helping them build the next generation of AI-powered enterprise applications on the  C3 AI Suite . You will work directly with data scientists, software engineers, and subject matter experts in the definition of new analytics capabilities able to provide our customers with the information they need to make proper decisions and enable their digital transformation. You will help find the appropriate machine learning / data mining algorithms to answer those questions and implement those on the C3 AI Suite so they can run at scale. C3 AI Data Scientists are equipped with modern development tools, IDEs, and AI agents to maximize productivity and accelerate solution delivery.   Qualified candidates will have an in-depth knowledge of most common machine learning techniques and their application. You will also understand the limitations of these algorithms and how to tweak them or derive from them to achieve similar results at large-scale. Note: This is a client-facing position which requires travel. Candidates should have the ability and willingness to travel based on business needs. Responsibilities: Driving adoption of Deep Learning systems into next generation of C3 AI products. Designing and deploying Machine Learning algorithms for industrial applications such as fraud detection and predictive maintenance. Collaborating with data and subject matter experts from C3 AI and its customer teams to seek, understand, validate, interpret, and correctly use new data elements. Qualifications: MS or PhD in Computer Science, Electrical Engineering, Statistics, or equivalent fields. Applied Machine Learning experience (regression and classification, supervised, and unsupervised learning).  Experience with prototyping languages such as Python and R. Strong mathematical background (linear algebra, calculus, probability, and statistics). Experience with scalable ML (MapReduce, streaming). Ability to drive a project and work both independently and in a team. Smart, motivated, can-do attitude, and seeks to make a difference. Excellent verbal and written communication. Ability to travel as needed  Preferred Qualifications: Experience with JavaScript and Java. Experience with time series and dynamical systems. A portfolio of projects (GitHub, papers, etc.). Experience working with modern IDEs and AI agent tools  as part of accelerated development workflows. C3 AI provides excellent benefits, a competitive compensation package and generous equity plan.  New York Base Pay Range $123,000 — $185,000 USD C3 AI is proud to be an Equal Opportunity and Affirmative Action Employer. We do not discriminate on the basis of any legally protected characteristics, including disabled and veteran status. 
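For illustration only (not part of the posting): a minimal supervised-learning example of the kind of classification work referenced above, using scikit-learn's bundled breast-cancer dataset. This is a generic sketch, not C3 AI's tooling.

# Train and evaluate a simple classifier on a built-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, clf.predict(X_test)))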

Posted 30+ days ago

Current · New York, NY
Senior Data Analyst At Current, we’re on a mission to enable our members to create better financial outcomes for themselves. Headquartered in NYC, we’re a leading U.S. fintech and one of the fastest growing companies with over million members. No matter your title, we welcome everyone to build great products, grow quickly, and make an impact together. ABOUT THE ROLE We are looking for a Senior Data Analyst with experience in B2C/DTC Member Experience or Customer Success. In this role, you will use data to drive strategic decisions, improve business performance, and enhance our customer interactions to create a seamless user experience. You will use advanced analytics to find trends, evaluate performance, and deliver strategic insights that facilitate data-driven decision-making within the Member Experience team and across the organization. The ideal candidate has a deep understanding of customer behavior, a passion for data-driven decision-making, and a proven ability to translate insights into actionable strategies. RESPONSIBILITIES Analyze member experience data to identify trends, pain points, and opportunities for operational efficiency. Develop and maintain data pipelines, dashboards and reports to track key performance indicators (KPIs) that tie customer success effectiveness, member satisfaction, and engagement to business bottomline metrics. Utilize internal and third-party data to conduct in-depth analyses, identify gaps, mitigate friction and pain points, and optimize customer interaction across channels. Partner with cross-functional teams, including Product, Engineering, and Customer Success, and leverage such techniques as A/B testing and customer segmentation to enhance personalization and support strategies. Provide actionable insights to leadership on customer outreach behavior, support efficiency and satisfaction, and areas for innovation. Ensure data integrity, accuracy, and governance in analytics pipelines across multiple customer interaction channels. Participate in on-call rotation to support production analytics data pipelines. Mentor junior analysts and contribute to a data-driven culture within the team. ABOUT YOU 5+ years of experience in data analytics, preferably in fintech, banking, or a customer-centric industry. Strong proficiency in SQL, Python, or R for data analysis and manipulation. Experience building reports and dashboards using data visualization tools such as Tableau, Looker, or Power BI. Expertise in customer support analytics, customer satisfaction score (CSAT) analysis, and operational efficiency metrics. Experience in A/B testing, customer segmentation, and deep dive analysis in the context of informing business strategies from user behaviors. Excellent problem-solving skills and the ability to translate complex data into actionable insights. Strong communication skills to effectively collaborate with technical and non-technical stakeholders. Strong project ownership and the ability to drive requirement gathering and prioritization. Experience with CRM systems (e.g., Zendesk) and customer feedback tools is a plus. Experience within the startup space is advantageous. Additional Skills and Qualifications Experience with Looker or other business intelligence tools is preferred. Exposure to dimensional modeling concepts to power meaningful analytics. Ability to visualize data effectively and communicate insights clearly and concisely. Experience modeling data using dbt. Familiarity with machine learning techniques and predictive analytics is a plus. 
Experience working with Fintech, banks or a B2C company. This role has a base salary range of $150,000 - $200,000. Compensation is determined based on experience, skill level, and qualifications, which are assessed during the interview process. Current offers a competitive total rewards package which includes base salary, equity, and comprehensive benefits. BENEFITS Competitive salary Stock options 401(k) savings plan Discretionary performance bonus program Biannual performance reviews Medical, Dental and Vision premiums covered at 100% for you and your dependents Unlimited time off and paid holidays Generous parental leave policy Commuter benefits Healthcare and Dependent care FSA benefit Employee Assistance Programs focused on mental health Healthcare advocacy program for all employees Access to mental health apps Team building activities Our modern Chelsea-based office with open floor plan, stocked kitchen, and catered lunches
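For illustration only (not part of the posting): a minimal A/B-test readout of the kind the analytics role above describes, using a two-proportion z-test from statsmodels. The conversion counts are made up; this is not Current's data.

# Compare conversion rates between a control and a variant experience.
# Illustrative only; counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 510]      # members who completed the flow in control vs. variant
exposures   = [10000, 10000]  # members shown each experience
z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")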

Posted 30+ days ago

iSoftTek Solutions Inc · Auburn Hills, MI
Job Title: Business Analyst (Data Governance and Data Quality Management)
Location: Auburn Hills, MI (hybrid; on-site at least two days per week). Must be local or willing to relocate (out of pocket from day 1). No exceptions. Heavy preference for local candidates; no West Coast (PST) candidates will be considered.
Duration: Long-term
Job Type: Contract (W2)
Work Type: Hybrid/Onsite

NOTE: The job duties below are the required skills; candidates should have details around these types of duties clearly outlined in their resumes.

Business Analyst (Data Governance and Data Quality Management):
· Partner with Data Stewards and Information Architects in support of Data Governance and Data Quality Management activities for the Chief Data & Analytics Office.
· Collaborate with business and technical stakeholders to query/research data sources and analyze data for various initiatives, including regulatory and operational reporting.
· Research, gather, and document data, metadata, and reference data in our Collibra Data Catalog to ensure it aligns with regulatory expectations.
· Meet with various stakeholders to gather information, facilitate meetings, conduct research, and document information.
· Financial Services / Banking experience preferred.

Please share your resume with srikar@isofttekinc.com or call (707) 435-3471.
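For illustration only (not part of the posting): a minimal Python sketch of the kind of data-quality rules an analyst in this role might document and verify (completeness, uniqueness, validity), checked with pandas. The file and column names are hypothetical.

# Run a few documented data-quality rules against an extract.
# Illustrative only; accounts_extract.csv and its columns are hypothetical.
import pandas as pd

accounts = pd.read_csv("accounts_extract.csv")

checks = {
    "customer_id is never null": accounts["customer_id"].notna().all(),
    "customer_id is unique": accounts["customer_id"].is_unique,
    "balance is non-negative": (accounts["balance"] >= 0).all(),
    "open_date parses as a date": pd.to_datetime(accounts["open_date"], errors="coerce").notna().all(),
}
for rule, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {rule}")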

Posted 30+ days ago

Guidehouse · San Antonio, Texas
Job Family : Data Science Consulting Travel Required : Up to 25% Clearance Required : Ability to Obtain Public Trust What You Will Do : We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics. Client Leadership & Delivery Collaborate with FCA clients to understand data architecture and reporting needs. Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau. Ensure delivery excellence and measurable outcomes across data migration and visualization efforts. Solution Development & Innovation Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python. Develop and optimize Tableau dashboards aligned with federal reporting standards. Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting. Practice & Team Leadership Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions. Support documentation, testing, and deployment of data products. Mentor junior developers and contribute to reusable frameworks and accelerators. What You Will Need : US Citizenship is required Bachelor’s degree is required Minimum TWO (2) years of experience in data engineering and dashboard development Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure) Strong proficiency in SQL, Python, and Spark Experience building ETL pipelines and integrating data sources into reporting platforms Familiarity with data governance, metadata, and compliance frameworks Excellent communication, facilitation, and stakeholder engagement skills What Would Be Nice To Have : AI/LLM Certifications Experience working with FCA clients such as DOT, GSA, USDA, or similar Familiarity with federal contracting and procurement processes What We Offer : Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include: Medical, Rx, Dental & Vision Insurance Personal and Family Sick Time & Company Paid Holidays Position may be eligible for a discretionary variable incentive bonus Parental Leave and Adoption Assistance 401(k) Retirement Plan Basic Life & Supplemental Life Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts Short-Term & Long-Term Disability Student Loan PayDown Tuition Reimbursement, Personal Development & Learning Opportunities Skills Development & Certifications Employee Referral Program Corporate Sponsored Events & Community Outreach Emergency Back-Up Childcare Program Mobility Stipend About Guidehouse Guidehouse is an Equal Opportunity Employer–Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. 
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com . Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com . Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
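For illustration only (not part of the posting): a minimal PySpark sketch of the kind of ETL step the data engineering role above describes: read raw records, clean them, aggregate, and write a reporting table. The paths and columns are hypothetical; this is not Guidehouse's or a client's pipeline.

# Read raw CSVs, clean and aggregate them, and write a curated Parquet table.
# Illustrative only; paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fca_reporting_etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://agency-raw/claims/*.csv")
clean = (raw.dropDuplicates(["claim_id"])
            .withColumn("claim_amount", F.col("claim_amount").cast("double"))
            .filter(F.col("claim_amount").isNotNull()))
summary = clean.groupBy("program", "fiscal_year").agg(
    F.sum("claim_amount").alias("total_amount"),
    F.count("*").alias("claim_count"),
)
summary.write.mode("overwrite").parquet("s3://agency-curated/claims_summary/")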

Posted today

PricewaterhouseCoopers · Rosemont, Illinois
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Intern/Trainee

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you will have the chance to work on a variety of assignments, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are encouraged to ask questions, take initiative, and produce quality work that adds value for our clients and contributes to our team's success. During your time at the Firm, you start to establish your personal brand, paving the way to more opportunities. Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Apply a learning mindset and take ownership for your own development. Appreciate diverse perspectives, needs, and feelings of others. Adopt habits to sustain high performance and develop your potential. Actively listen, ask questions to check understanding, and clearly express ideas. Seek, reflect, act on, and give feedback. Gather information from a range of sources to analyse facts and discern patterns. Commit to understanding how the business works and building commercial awareness. Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements.

The Opportunity: As a Cloud Engineering, Data & Analytics - Data Science Intern, you will engage in a dynamic environment where you will support teams in delivering data-driven insights and solutions. This role offers a unique chance to immerse yourself in the world of data science, working alongside seasoned professionals to analyze complex datasets and contribute to impactful projects. As an Intern, you will focus on learning and gaining exposure to PwC's practices, supporting teams with basic tasks, and participating in projects that enhance your understanding of data analytics and its applications. In this role at PwC, you will have the opportunity to develop your skills in data modeling, financial data mining, and machine learning, while also gaining hands-on experience with tools like Python and statistical analysis software. You will be encouraged to ask questions, take initiative, and produce quality work that adds value to our clients and contributes to our team's success. This is an exciting opportunity to start building your personal brand and pave the way for future opportunities in the field of data science.

Responsibilities: Supporting teams in data analysis and analytics projects to gain exposure to industry practices. Conducting research and gathering information to assist in the development of data-driven insights. Participating in algorithm development and machine learning initiatives to enhance predictive analytics capabilities. Utilizing tools such as Alteryx, Power BI, and Tableau for data visualization and complex data analysis. Assisting in data validation and data security processes to maintain data integrity. Engaging in customer analysis and business data analytics to inform strategic decision-making. Applying Python programming skills to develop and refine data models and statistical analysis. Collaborating with team members to create dashboards and reports that communicate key performance indicators and trends.

What You Must Have: Currently pursuing or have completed a Bachelor's degree. Client service intern positions are entry-level roles intended for job seekers who are in their third year of a four-year degree program or fourth year of a five-year program at the time of application. Winter internships typically occur during the spring semester preceding the student's final year of school; summer internships typically take place during the summer preceding the student's final year of school.

What Sets You Apart: Preference for one of the following field(s) of study: Management Information Systems, Information Technology, Computer Science, Data Analytics, Data Science, Statistics, Mathematics. Preference for a 3.3 overall GPA. Demonstrating proficiency in Python for data analysis. Utilizing machine learning techniques for predictive analytics. Applying data visualization skills with Tableau and Power BI. Conducting complex data analysis to derive actionable insights. Developing algorithms to enhance data processing efficiency. Engaging in analytic research to support data-driven decisions. Leveraging AI to create efficiencies, innovate ways of working, and deliver distinctive outcomes.

Travel Requirements: Up to 80%

Job Posting End Date

Learn more about how we work: https://pwc.to/how-we-work

PwC does not intend to hire experienced or entry level job seekers who will need, now or in the future, PwC sponsorship through the H-1B lottery, except as set forth within the following policy: https://pwc.to/H-1B-Lottery-Policy. As PwC is an equal opportunity employer, all qualified applicants will receive consideration for employment at PwC without regard to race; color; religion; national origin; sex (including pregnancy, sexual orientation, and gender identity); age; disability; genetic information (including family medical history); veteran, marital, or citizenship status; or, any other status protected by law. For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles' Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for Employment in accordance with these laws.
At PwC, we recognize that conviction records may have a direct, adverse, and negative relationship to responsibilities such as accessing sensitive company or customer information, handling proprietary assets, or collaborating closely with team members. We evaluate these factors thoughtfully to establish a secure and trusted workplace for all. The salary range for this position is: $29.25 - $48.00. For roles that are based in Maryland, this is the listed salary range for this position. Actual compensation within the range will be dependent upon the individual's skills, experience, qualifications and location, and applicable employment laws. PwC offers a wide range of benefits, including medical, dental, vision, 401k, holiday pay, vacation, personal and family sick leave, and more. To view our benefits at a glance, please visit the following link: https://pwc.to/benefits-at-a-glance
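For context on the kind of Python-based predictive analytics this internship references, here is a minimal, self-contained sketch using scikit-learn on synthetic data. The features, model choice, and metric are illustrative assumptions only, not PwC material or methodology.

```python
# Generic illustration (synthetic data, hypothetical features): fit a simple
# classifier and report a holdout metric, the basic shape of a predictive-
# analytics exercise in Python.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```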

Posted today

Applied Intuition logo
Applied IntuitionMountain View, CA
About Applied Intuition Applied Intuition is the vehicle intelligence company that accelerates the global adoption of safe, AI-driven machines. Founded in 2017, Applied Intuition delivers the toolchain, Vehicle OS, and autonomy stacks to help customers build intelligent vehicles and shorten time to market. Eighteen of the top 20 global automakers and major programs across the Department of Defense trust Applied Intuition's solutions to deliver vehicle intelligence. Applied Intuition services the automotive, defense, trucking, construction, mining, and agriculture industries and is headquartered in Mountain View, CA, with offices in Washington, D.C., San Diego, CA, Ft. Walton Beach, FL, Ann Arbor, MI, London, Stuttgart, Munich, Stockholm, Seoul, and Tokyo. Learn more at appliedintuition.com. We are an in-office company, and our expectation is that employees primarily work from their Applied Intuition office 5 days a week. However, we also recognize the importance of flexibility and trust our employees to manage their schedules responsibly. This may include occasional remote work, starting the day with morning meetings from home before heading to the office, or leaving earlier when needed to accommodate family commitments. (Note: For EpiSci job openings, fully remote work will be considered by exception.) About the role We are looking for a Staff Data Platform Engineer to shape the strategy, architecture, and execution of our next-generation data ecosystem. In this role, you will partner closely with our autonomy stack teams-the "customers" of our platform-to deeply understand their workflows, pain points, and evolving needs. You will lead the design and development of robust, scalable, and data-intensive distributed systems that power the full lifecycle of autonomous driving data: collection, ingestion, curation, machine learning training, and evaluation. You'll be a key decision-maker in defining best practices, data contracts, and integration patterns across upstream and downstream systems. This is a highly impactful role for someone who thrives at the intersection of technical leadership, system design, and cross-team collaboration, and who wants to elevate the capabilities of our data platform to support cutting-edge ML and autonomy development. 
At Applied Intuition, you will: Drive Data Platform Strategy- Define the long-term vision and technical roadmap for the data ecosystem, balancing scalability, reliability, cost efficiency, and developer experience Partner with Customers- Engage deeply with autonomy stack teams to gather requirements, uncover pain points, and translate them into platform capabilities Lead Complex Workflow Development- Architect and build end-to-end, large-scale ETL and data workflows for data collection, ingestion, transformation, and delivery Establish Data Contracts- Define and enforce clear SLAs and contracts with upstream data producers and downstream data consumers Set Best Practices- Champion data engineering best practices around governance, schema evolution, lineage, quality, and observability Mentor and Influence- Guide other engineers and teams on designing scalable data systems and making strategic technology choices Collaborate Across Functions- Work with infrastructure, ML platform, autonomy stack, and labeling teams to ensure smooth data flow and ecosystem integration We're looking for someone who has: 10+ years of experience in data engineering, distributed systems, or related backend engineering roles Proven track record of architecting and building large-scale, data-intensive, distributed systems Deep experience with complex ETL pipelines, data ingestion frameworks, and data processing engines (e.g., Spark, Flink, Airflow, Flyte, Kafka, etc) Strong understanding of data modeling, partitioning, schema evolution, and metadata management at scale Hands-on experience with cloud object stores (e.g., AWS S3), lakehouse architectures, and data warehouse technologies Ability to drive technical discussions with both engineers and non-technical stakeholders Strong communication and leadership skills, with the ability to influence across teams and functions Nice to have: Experience supporting ML/AI workflows at scale, from raw data ingestion to model training and evaluation Familiarity with data governance, lineage tracking, and observability tools Experience in autonomous systems, robotics, or other high-volume sensor data domains Contributions to open-source data infrastructure projects Why Join Us? You'll be at the heart of enabling autonomous driving innovation-building the systems that allow teams to harness massive amounts of real-world and simulated data for developing and validating our stack. You'll have the autonomy to set the technical direction, the opportunity to solve some of the hardest data engineering problems, and the ability to make a measurable impact on the success of our platform and products. Compensation at Applied Intuition for eligible roles includes base salary, equity, and benefits. Base salary is a single component of the total compensation package, which may also include equity in the form of options and/or restricted stock units, comprehensive health, dental, vision, life and disability insurance coverage, 401k retirement benefits with employer match, learning and wellness stipends, and paid time off. Note that benefits are subject to change and may vary based on jurisdiction of employment. Applied Intuition pay ranges reflect the minimum and maximum intended target base salary for new hire salaries for the position. 
The actual base salary offered to a successful candidate will additionally be influenced by a variety of factors including experience, credentials & certifications, educational attainment, skill level requirements, interview performance, and the level and scope of the position. Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in the location listed is: $153,000 - $222,000 USD annually. Don't meet every single requirement? If you're excited about this role but your past experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles. Applied Intuition is an equal opportunity employer and federal contractor or subcontractor. Consequently, the parties agree that, as applicable, they will abide by the requirements of 41 CFR 60-1.4(a), 41 CFR 60-300.5(a) and 41 CFR 60-741.5(a) and that these laws are incorporated herein by reference. These regulations prohibit discrimination against qualified individuals based on their status as protected veterans or individuals with disabilities, and prohibit discrimination against all individuals based on their race, color, religion, sex, sexual orientation, gender identity or national origin. These regulations require that covered prime contractors and subcontractors take affirmative action to employ and advance in employment individuals without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status or disability. The parties also agree that, as applicable, they will abide by the requirements of Executive Order 13496 (29 CFR Part 471, Appendix A to Subpart A), relating to the notice of employee rights under federal labor laws.
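To make the "data contracts" responsibility above concrete, here is a toy, library-free Python sketch that checks an incoming batch of drive-log records against a declared contract: required fields, expected types, and a freshness SLA. The field names, SLA value, and record shape are hypothetical assumptions, not Applied Intuition code or schemas.

```python
# Toy data-contract check on an incoming batch (all names hypothetical).
from datetime import datetime, timedelta, timezone

CONTRACT = {
    "vehicle_id": str,
    "timestamp": str,      # ISO-8601 string supplied by the producer
    "sensor": str,
    "payload_uri": str,
}
MAX_LAG = timedelta(hours=6)  # assumed freshness SLA with the upstream producer

def violations(record: dict) -> list[str]:
    problems = [f"missing/invalid {k}" for k, t in CONTRACT.items()
                if not isinstance(record.get(k), t)]
    try:
        ts = datetime.fromisoformat(record["timestamp"])
        if ts.tzinfo is None:
            ts = ts.replace(tzinfo=timezone.utc)
        if datetime.now(timezone.utc) - ts > MAX_LAG:
            problems.append("stale record (freshness SLA exceeded)")
    except (KeyError, ValueError):
        problems.append("unparseable timestamp")
    return problems

batch = [{"vehicle_id": "veh-042", "timestamp": "2024-05-01T12:00:00+00:00",
          "sensor": "lidar_front", "payload_uri": "s3://bucket/drive.bin"}]
bad = [(r, v) for r in batch if (v := violations(r))]
print(f"{len(bad)} contract violations in a batch of {len(batch)}")
```

In a real platform, a check like this would sit behind the ingestion boundary so that producers get fast feedback and downstream consumers can trust what lands in the lakehouse.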

Posted 30+ days ago

Geico Insurance logo
Geico InsuranceKaty, TX
At GEICO, we offer a rewarding career where your ambitions are met with endless possibilities. Every day we honor our iconic brand by offering quality coverage to millions of customers and being there when they need us most. We thrive through relentless innovation to exceed our customers' expectations while making a real impact for our company through our shared purpose. When you join our company, we want you to feel valued, supported and proud to work here. That's why we offer The GEICO Pledge: Great Company, Great Culture, Great Rewards and Great Careers. GEICO is looking for a customer-obsessed and results-oriented Product Manager to support our Data Ingestion and Movement platform. This role will help drive product direction for our data ingestion, ETL/ELT pipelines, and data movement services, focusing on enabling reliable data flow into our lakehouse and other data stores. The ideal candidate will have a technical background in data engineering and experience delivering scalable data platforms and data pipeline solutions. Description As a Product Manager for Data Ingestion and Movement, you will be responsible for supporting the product vision and execution for GEICO's data ingestion and movement products. To successfully shape a platform that enables pipeline-as-a-service and supports a scalable data mesh architecture, a strong technical understanding of data pipelines, data integration patterns, data orchestration, ETL/ELT processes, and platform engineering is essential. Your goal is to abstract complexity and empower domain teams to autonomously and efficiently build, deploy, and govern data pipelines. This role also requires stakeholder management skills and the ability to bridge technical solutions with business value. Key Responsibilities Support the development and execution of data ingestion and movement platform vision aligned with business goals and customer needs Help create and maintain a clear, prioritized roadmap for data ingestion and movement capabilities that balances short-term delivery with long-term strategic objectives Support evangelizing the Data Ingestion and Movement platform across the organization and help drive stakeholder alignment Stay abreast of industry trends and competitive landscape (Apache Kafka, Apache Airflow, AWS Glue, Azure Data Factory, Google Cloud Dataflow, etc.) 
to inform data ingestion strategy Support requirement gathering and product strategy for data ingestion, ETL/ELT pipelines, and data movement services Understand end-to-end data ingestion workflows and how data movement fits into the broader data ecosystem and downstream analytics Support data governance initiatives for data lineage, quality, and compliance in data ingestion and movement processes Ensure data ingestion and movement processes adhere to regulatory, compliance, and data quality standards Partner with engineering on the development of data ingestion tools, pipeline orchestration services, and data movement capabilities Help define product capabilities for data ingestion, pipeline monitoring, error handling, and data quality validation to improve reliability and performance Support customer roadshows and training on data ingestion and movement capabilities Build instrumentation and observability into data ingestion and movement tools to enable data-driven product decisions and pipeline monitoring Work closely with engineering, data engineering, and data teams to ensure seamless delivery of data ingestion and movement products Partner with customer success, support, and engineering teams to create clear feedback loops Translate data ingestion and movement technical capabilities into business value and user benefits Support alignment across multiple stakeholders and teams in complex, ambiguous environments Qualifications Required Understanding of data ingestion patterns, ETL/ELT processes, and data pipeline architectures (Apache Kafka, Apache Airflow, Apache Spark, AWS Glue, etc.) Experience with data integration APIs, connectors, and data pipeline orchestration tools Basic understanding of data pipeline monitoring, observability, and data quality validation practices Experience in cloud data ecosystems (AWS, GCP, Azure) Proven analytical and problem-solving abilities with a data-driven approach to decision-making Experience working with Agile methodologies and tools (JIRA, Azure DevOps) Good communication, stakeholder management, and cross-functional collaboration skills Strong organizational skills with ability to manage product backlogs Preferred Previous experience as a software or data engineer is a plus Strong business acumen to prioritize features based on customer value and business impact Experience with data ingestion tools (Apache Kafka, Apache NiFi, AWS Kinesis, Azure Event Hubs, etc.) Knowledge of data lineage, data quality frameworks, and compliance requirements for data ingestion Insurance industry experience Experience Minimum 5+ years of technical product management experience building platforms that support data ingestion, ETL/ELT pipelines, data engineering, and data infrastructure Track record of delivering successful products in fast-paced environments Experience supporting complex, multi-stakeholder initiatives Proven ability to work with technical teams and translate business requirements into technical product specifications Experience with customer research, user interviews, and data-driven decision making Education Bachelor's degree in computer science, engineering, management information systems, or related technical field required MBA/MS or equivalent experience preferred Annual Salary $88,150.00 - $157,850.00 The above annual salary range is a general guideline. Multiple factors are taken into consideration to arrive at the final hourly rate/ annual salary to be offered to the selected candidate. 
Factors include, but are not limited to, the scope and responsibilities of the role, the selected candidate's work experience, education and training, the work location as well as market and business considerations. At this time, GEICO will not sponsor a new applicant for employment authorization for this position. The GEICO Pledge: Great Company: At GEICO, we help our customers through life's twists and turns. Our mission is to protect people when they need it most and we're constantly evolving to stay ahead of their needs. We're an iconic brand that thrives on innovation, exceeding our customers' expectations and enabling our collective success. From day one, you'll take on exciting challenges that help you grow and collaborate with dynamic teams who want to make a positive impact on people's lives. Great Careers: We offer a career where you can learn, grow, and thrive through personalized development programs, created with your career - and your potential - in mind. You'll have access to industry leading training, certification assistance, career mentorship and coaching with supportive leaders at all levels. Great Culture: We foster an inclusive culture of shared success, rooted in integrity, a bias for action and a winning mindset. Grounded by our core values, we have an established culture of caring, inclusion, and belonging that values different perspectives. Our teams are dynamic and multi-faceted, led by supportive leaders, driven by performance excellence and unified under a shared purpose. As part of our culture, we also offer employee engagement and recognition programs that reward the positive impact our work makes on the lives of our customers. Great Rewards: We offer compensation and benefits built to enhance your physical well-being, mental and emotional health and financial future. Comprehensive Total Rewards program that offers personalized coverage tailor-made for you and your family's overall well-being. Financial benefits including market-competitive compensation; a 401K savings plan vested from day one that offers a 6% match; performance and recognition-based incentives; and tuition assistance. Access to additional benefits like mental healthcare as well as fertility and adoption assistance. Supports flexibility- We provide workplace flexibility as well as our GEICO Flex program, which offers the ability to work from anywhere in the US for up to four weeks per year. The equal employment opportunity policy of the GEICO Companies provides for a fair and equal employment opportunity for all associates and job applicants regardless of race, color, religious creed, national origin, ancestry, age, gender, pregnancy, sexual orientation, gender identity, marital status, familial status, disability or genetic information, in compliance with applicable federal, state and local law. GEICO hires and promotes individuals solely on the basis of their qualifications for the job to be filled. GEICO reasonably accommodates qualified individuals with disabilities to enable them to receive equal employment opportunity and/or perform the essential functions of the job, unless the accommodation would impose an undue hardship to the Company. This applies to all applicants and associates. GEICO also provides a work environment in which each associate is able to be productive and work to the best of their ability. We do not condone or tolerate an atmosphere of intimidation or harassment. 
We expect and require the cooperation of all associates in maintaining an atmosphere free from discrimination and harassment with mutual respect by and for all associates and applicants.
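To ground the data-quality-validation capability this role would help define, here is a small hypothetical sketch of the kind of validation step a pipeline platform might standardize: compute a few batch-level metrics with pandas and enforce one threshold that monitoring could alert on. Column names, checks, and thresholds are invented; this is not GEICO tooling.

```python
# Hypothetical batch-level data-quality metrics for an ingested dataset.
import json
import pandas as pd

def quality_metrics(df: pd.DataFrame) -> dict:
    return {
        "row_count": int(len(df)),
        "null_policy_id_pct": float(df["policy_id"].isna().mean() * 100),
        "duplicate_policy_id_pct": float(df["policy_id"].duplicated().mean() * 100),
        "negative_premium_rows": int((df["premium"] < 0).sum()),
    }

batch = pd.DataFrame({
    "policy_id": ["P1", "P2", "P2", None],
    "premium": [120.0, 95.5, 95.5, -10.0],
})

metrics = quality_metrics(batch)
print(json.dumps(metrics, indent=2))  # emitted to monitoring/observability

# A pipeline-as-a-service layer could quarantine or fail the batch when an
# agreed threshold is breached, for example:
assert metrics["null_policy_id_pct"] <= 25.0, "too many null policy IDs"
```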

Posted 2 weeks ago

Guidehouse logo
GuidehouseMclean, VA
Job Family: Data Science Consulting Travel Required: Up to 25% Clearance Required: Ability to Obtain Public Trust What You Will Do: We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics. Client Leadership & Delivery Collaborate with FCA clients to understand data architecture and reporting needs. Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau. Ensure delivery excellence and measurable outcomes across data migration and visualization efforts. Solution Development & Innovation Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python. Develop and optimize Tableau dashboards aligned with federal reporting standards. Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting. Practice & Team Leadership Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions. Support documentation, testing, and deployment of data products. Mentor junior developers and contribute to reusable frameworks and accelerators. What You Will Need: US Citizenship is required Bachelor's degree is required Minimum FIVE (5) years of experience in data engineering and dashboard development Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure, GCP) Strong proficiency in SQL, Python, and Spark Experience building ETL pipelines and integrating data sources into reporting platforms Familiarity with data governance, metadata, and compliance frameworks Excellent communication, facilitation, and stakeholder engagement skills What Would Be Nice To Have: Master's Degree AI/LLM Certifications Experience working with FCA clients such as DOT, GSA, USDA, or similar Familiarity with federal contracting and procurement processes What We Offer: Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include: Medical, Rx, Dental & Vision Insurance Personal and Family Sick Time & Company Paid Holidays Position may be eligible for a discretionary variable incentive bonus Parental Leave and Adoption Assistance 401(k) Retirement Plan Basic Life & Supplemental Life Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts Short-Term & Long-Term Disability Student Loan PayDown Tuition Reimbursement, Personal Development & Learning Opportunities Skills Development & Certifications Employee Referral Program Corporate Sponsored Events & Community Outreach Emergency Back-Up Childcare Program Mobility Stipend About Guidehouse Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. 
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
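For readers who want a concrete picture of the ETL work described above, here is a minimal PySpark sketch of an ingest-clean-publish pipeline of the sort a Databricks and Tableau stack might sit on. The S3 paths, column names, and transformations are hypothetical, not Guidehouse deliverables.

```python
# Minimal PySpark ETL sketch (hypothetical paths and columns): read a raw CSV
# extract, standardize a few fields, and write a partitioned Parquet table for
# a downstream reporting layer.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fca_reporting_etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("s3://example-bucket/raw/grants/*.csv"))   # hypothetical source

cleaned = (raw
           .withColumn("award_amount", F.col("award_amount").cast("double"))
           .withColumn("award_date", F.to_date("award_date", "yyyy-MM-dd"))
           .withColumn("fiscal_year", F.year("award_date"))
           .dropDuplicates(["award_id"])
           .filter(F.col("award_amount") > 0))

# Publish an analytics-ready table that dashboards can query.
(cleaned.write
 .mode("overwrite")
 .partitionBy("fiscal_year")
 .parquet("s3://example-bucket/curated/grants/"))
```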

Posted 3 weeks ago

MassMutual Financial Group logo
MassMutual Financial GroupNew York, NY
The Opportunity Join our dynamic team as a Data Software Engineer - Corporate Technology Data Engineering & Analytics, where you'll play a pivotal role in leading the design, development, and administration of complex AWS environments and contribute to enterprise-wide data initiatives. The ideal candidate will bring experience working with global delivery teams, demonstrate thought leadership in data architecture, and actively mentor junior engineers to build internal capability. The Team You'll be an integral part of our esteemed Corporate Technology Team, focused on Data Engineering & Analytics. Our team operates on a global scale, driving innovation and excellence across diverse areas of expertise. As a Data Software Engineer, you'll play a pivotal role in high impact Corporate Technology Finance Initiatives, ensuring alignment with organizational objectives and driving impactful outcomes. This is an opportunity to collaborate closely with our Corp Technology leadership team as well as our CFO customers. Our team thrives on collaboration, innovation, and a shared commitment to excellence. Together, we're shaping the future of technology within our organization and making a lasting impact on a global scale. Join us and be part of a dynamic team where your contributions will be valued and your potential unleashed. The Impact: Lead the design, development, and deployment of end-to-end data engineering solutions on AWS, ensuring scalability, performance, and security. Administer and manage a broad range of AWS services such as EC2, S3, Glue, Lambda, IAM, and more in a production environment. Develop and maintain infrastructure as code using tools like AWS CloudFormation or Terraform. Design and orchestrate complex data pipelines using Apache Airflow or AWS Step Functions to enable reliable ELT/ETL processes. Implement data quality checks, monitoring, and alerting to ensure data integrity and reliability. Optimize data workflows for performance and cost-efficiency in the cloud. Drive the implementation of Operational Data Store (ODS) patterns and integrate with downstream data warehouse and analytics layers. Collaborate with global delivery teams across time zones to deliver high-quality, well-documented solutions on schedule. Develop and evaluate proof-of-concepts (POCs) for AWS-native and third-party tool integrations (e.g., Databricks, Snowflake). Conduct architecture reviews, provide code-level guidance, and establish best practices across cloud, data, and DevOps efforts. Mentor junior engineers and support team skill development through peer reviews, technical coaching, and design sessions. Participate in roadmap planning and contribute to long-term data platform strategy and technology selection. Ensure compliance with data governance, security, and operational standards across all engineering activities. 
The Minimum Qualifications Bachelor's in Computer Science, Engineering or related technical field 8+ years of experience building and managing Cloud solutions preferably in the AWS ecosystem, including infrastructure and data services 2+ years experience with understanding of data warehousing concepts, ODS, dimensional modeling, and scalable architecture design 2+ years experience with Terraform, Apache Airflow, Databricks or Snowflake in a production or large-scale prototype environment 2+ years experience in Python, SQL, and automation scripting 2+ years experience with containerization (Docker, ECS, EKS) The Ideal Qualifications Master's degree in Computer Science, Engineering or related field AWS Certifications (Solutions Architect, Data Analytics, or DevOps Engineer). Knowledge of streaming data technologies (e.g., Kinesis, Kafka) Exposure to data lake and data warehouse architectures Experience with monitoring and observability tools (e.g., CloudWatch, Datadog, Prometheus) Exposure to machine learning pipelines and MLOps tools (e.g., SageMaker, MLflow, Kubeflow) Demonstrated experience working with global delivery teams and cross-functional stakeholders Strong communication skills with a proven ability to work across functional teams Experience with data lake architecture, data governance frameworks, and modern metadata management Familiarity with modern DevOps practices including CI/CD pipelines, monitoring, and alerting in cloud environments Experience with data cataloging tools (e.g., AWS Glue Data Catalog, Apache Atlas) Understanding of data privacy regulations and compliance (e.g., GDPR, HIPAA) Exceptional communication and interpersonal skills Ability to influence and motivate teams without direct authority Excellent time management and organizational skills, with the ability to prioritize multiple initiatives #LI-RK1 Salary Range: $134,400.00-$176,400.00 At MassMutual, we focus on ensuring fair equitable pay, by providing competitive salaries, along with incentive and bonus opportunities for all employees. Your total compensation package includes either a bonus target or in a sales-focused role a Variable Incentive Compensation component. Why Join Us. We've been around since 1851. During our history, we've learned a few things about making sure our customers are our top priority. In order to meet and exceed their expectations, we must have the best people providing the best thinking, products and services. To accomplish this, we celebrate an inclusive, vibrant and diverse culture that encourages growth, openness and opportunities for everyone. A career with MassMutual means you will be part of a strong, stable and ethical business with industry leading pay and benefits. And your voice will always be heard. We help people secure their future and protect the ones they love. As a company owned by our policyowners, we are defined by mutuality and our vision to put customers first. It's more than our company structure - it's our way of life. We are a company of people protecting people. Our company exists because people are willing to share risk and resources, and rely on each other when it counts. At MassMutual, we Live Mutual. MassMutual is an Equal Employment Opportunity employer Minority/Female/Sexual Orientation/Gender Identity/Individual with Disability/Protected Veteran. We welcome all persons to apply. Note: Veterans are welcome to apply, regardless of their discharge status. 
If you need an accommodation to complete the application process, please contact us and share the specifics of the assistance you need. For more information about our extensive benefits offerings, please check out our Total Rewards at a Glance.
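As a sketch of the orchestration pattern this role describes (extract, validate, load on a schedule), here is a skeletal Apache Airflow 2.x DAG with placeholder task bodies. The DAG id, schedule, and task logic are assumptions for illustration, not MassMutual code.

```python
# Skeletal Airflow 2.x DAG: extract -> data-quality check -> load.
# Task bodies are placeholders; connection details are intentionally omitted.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    ...  # e.g., pull the daily source file from S3

def validate(**_):
    ...  # e.g., row counts and null checks; raise to fail the run and alert

def load(**_):
    ...  # e.g., upsert into the ODS / warehouse layer

with DAG(
    dag_id="ods_daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_validate >> t_load
```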

Posted 30+ days ago

C logo
C3 AI Inc.Tysons Corner, VA
C3 AI (NYSE: AI) is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products: the C3 Agentic AI Platform, an end-to-end platform for developing, deploying, and operating enterprise AI applications; C3 AI applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally; and C3 Generative AI, a suite of domain-specific generative AI offerings for the enterprise. Learn more at: C3 AI. As a Data Scientist / Senior Data Scientist - Federal Optimization, you will partner with some of the largest and most mission-critical organizations in the world to design and deliver the next generation of AI-powered enterprise applications. Our team focuses on developing scalable, explainable optimization models and algorithms tailored to federal use cases across domains such as logistics, operations, resource planning, and more. You'll work cross-functionally with data scientists, engineers, subject matter experts, and federal stakeholders to deliver full-lifecycle solutions: from translating client input into soft and hard constraints, to deploying robust, production-grade optimization tools on the C3 AI Suite. Qualified candidates should possess deep expertise in operations research and optimization. This role requires US Citizenship. Responsibilities: Research, design, implement, and deploy optimization solutions for enterprise applications leveraging the C3 AI Suite. Transform client requirements into mathematical formulations. Partner with cross-functional teams to translate optimization model insights into actionable strategies and measurable outcomes. Assist and enable federal customers to build their own optimization applications on the C3 AI Suite. Develop, maintain, and enhance optimization frameworks, libraries, and tools to ensure scalability and efficiency while contributing to the continuous improvement of the C3 AI Suite. Stay informed on state-of-the-art optimization techniques, promote best practices, and foster an innovative and collaborative work environment at C3 AI. Qualifications: U.S. Citizenship (and willingness to obtain a security clearance). Bachelor's in Computer Science, Electrical Engineering, Statistics, Operations Research, or equivalent fields. Strong foundation in optimization techniques (e.g., LP, MILP, MINLP) and solvers. Strong mathematical foundation (linear algebra, calculus, statistics, probability). Proficiency in Python and experience with mathematical programming libraries. Excellent communication skills and ability to work independently or in teams. Motivated, curious, and eager to learn about federal mission domains. Preferred Qualifications: MS or PhD in Operations Research, Applied Mathematics, Computer Science, Industrial Engineering, or a related field. Active TS/SCI with CI or Full-Scope Polygraph. Professional experience applying optimization in federal or customer-facing environments. Familiarity with commercial solvers (e.g., Gurobi), Git, and GenAI tools. Understanding of machine learning, deep learning, or reinforcement learning. Portfolio of relevant projects or publications. C3 AI provides a competitive compensation package and excellent benefits. Candidates must be authorized to work in the United States without the need for current or future company sponsorship. C3 AI is proud to be an Equal Opportunity and Affirmative Action Employer. 
We do not discriminate on the basis of any legally protected characteristics, including disabled and veteran status.
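To illustrate the "client requirements into mathematical formulations" step in the most generic way, here is a tiny linear program built with the open-source PuLP library and its bundled CBC solver. The variables, coefficients, and constraints are invented, and PuLP stands in for whatever solver stack a real engagement would use; this is not C3 AI Suite code.

```python
# Hypothetical resource-allocation LP: maximize tonnage moved across two
# made-up logistics routes subject to crew-hour and fuel-budget constraints.
import pulp

# Decision variables: hours assigned to each route.
route_a = pulp.LpVariable("route_a_hours", lowBound=0)
route_b = pulp.LpVariable("route_b_hours", lowBound=0)

model = pulp.LpProblem("toy_routing", pulp.LpMaximize)
model += 4 * route_a + 3 * route_b, "tons_moved"      # objective (made-up rates)

# Hard constraints elicited from hypothetical stakeholder requirements.
model += route_a + route_b <= 40, "crew_hours"
model += 2 * route_a + route_b <= 60, "fuel_budget"

model.solve(pulp.PULP_CBC_CMD(msg=False))              # CBC ships with PuLP
print(pulp.LpStatus[model.status], route_a.value(), route_b.value())
```

In practice, the soft constraints the posting mentions would typically enter as penalty terms in the objective rather than as hard inequalities.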

Posted 30+ days ago

Guidehouse logo
GuidehouseArlington, VA
Job Family: Data Science Consulting Travel Required: Up to 25% Clearance Required: Ability to Obtain Public Trust What You Will Do: We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics. Client Leadership & Delivery Collaborate with FCA clients to understand data architecture and reporting needs. Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau. Ensure delivery excellence and measurable outcomes across data migration and visualization efforts. Solution Development & Innovation Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python. Develop and optimize Tableau dashboards aligned with federal reporting standards. Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting. Practice & Team Leadership Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions. Support documentation, testing, and deployment of data products. Mentor junior developers and contribute to reusable frameworks and accelerators. What You Will Need: US Citizenship is required Bachelor's degree is required Minimum FIVE (5) years of experience in data engineering and dashboard development Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure, GCP) Strong proficiency in SQL, Python, and Spark Experience building ETL pipelines and integrating data sources into reporting platforms Familiarity with data governance, metadata, and compliance frameworks Excellent communication, facilitation, and stakeholder engagement skills What Would Be Nice To Have: Master's Degree AI/LLM Certifications Experience working with FCA clients such as DOT, GSA, USDA, or similar Familiarity with federal contracting and procurement processes What We Offer: Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include: Medical, Rx, Dental & Vision Insurance Personal and Family Sick Time & Company Paid Holidays Position may be eligible for a discretionary variable incentive bonus Parental Leave and Adoption Assistance 401(k) Retirement Plan Basic Life & Supplemental Life Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts Short-Term & Long-Term Disability Student Loan PayDown Tuition Reimbursement, Personal Development & Learning Opportunities Skills Development & Certifications Employee Referral Program Corporate Sponsored Events & Community Outreach Emergency Back-Up Childcare Program Mobility Stipend About Guidehouse Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. 
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.

Posted 3 weeks ago

Bristol Myers Squibb logo
Bristol Myers SquibbSan Diego, CA
Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us. Summary: As part of the Translational Data Products team, you will directly support translational medicine leaders in their mission to discover biomarkers that guide patient selection and treatment response for BMS assets. Your work will enable exploratory data analysis that drives crucial biomarker decisions at the heart of translational research. You will bridge data engineering with innovation: orchestrating advanced pipelines, ensuring auto-generated ETL and schema mappings are correct, and experimenting with the newest techniques-such as MCP servers, prompt engineering strategies (ReAct, chain-of-thought, etc.), and LLM-assisted tooling-to make biomarker data accessible, trustworthy, and actionable. Key Responsibilities: Enable biomarker discovery: Deliver data pipelines and mappings that help translational leaders identify biomarkers (molecular, digital, imaging) for patient stratification and treatment response. Innovate with AI/LLMs: Explore and apply cutting-edge approaches (MCP servers, prompt orchestration, auto-schema mapping, LLM-based ETL generation) to accelerate and improve data workflows. Data orchestration: Oversee ingestion from diverse sources (vendor feeds, raw instruments, CSV, PDF, etc.), ensuring automated ETL and sample-to-target mapping & transformation (STTM) outputs meet stakeholder needs. Quality and profiling: Assess and validate source data, documenting any cleaning, normalization, or semantic mapping that needs to be applied for optimal QC, and identify where improvements are required vs. merely convenient. Hands-on implementation: Build or adapt tools/scripts (Python, SQL, AWS Glue, Databricks, etc.) when automation falls short. Stakeholder collaboration: Act as a partner to translational medicine leaders-communicating progress and brainstorming next steps as priorities evolve. Agile team contribution: Participate actively in standups, design sessions, sprint demos and innovation discussions. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, Bioinformatics, or related field. 5+ years of experience in data engineering, ideally with exposure to life sciences or healthcare. Strong experience with data integration from heterogeneous sources (structured, semi-structured, unstructured). Proficiency in AWS, Python and SQL, with ability to prototype and automate workflows. Hands-on expertise with ETL frameworks (AWS Glue, Databricks, Airflow). Familiarity with modern AI/LLM approaches for data transformation and semantic mapping is highly desirable. Excellent communication skills to engage both technical and scientific stakeholders. 
Comfortable in agile, exploratory, scientific environments What Makes This Role Unique: Direct scientific impact: Your work connects directly to patient-centric translational decisions. Innovation: You are encouraged to explore new technologies and approaches, not just maintain existing ones. Automation first: Instead of building every pipeline from scratch, you orchestrate and validate auto-generated ETLs and mappings. Collaborative science + engineering: You will brainstorm with scientists, demo working solutions, and help shape the future of translational data products. If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Compensation Overview: Cambridge Crossing: $148,850 - $180,374; San Diego - CA - US: $148,850 - $180,374; Tampa - FL - US: $135,320 - $163,976. The starting compensation range(s) for this role are listed above for a full-time employee (FTE) basis. Additional incentive cash and stock opportunities (based on eligibility) may be available. The starting pay rate takes into account characteristics of the job, such as required skills, where the job is performed, the employee's work schedule, job-related knowledge, and experience. Final, individual compensation will be decided based on demonstrated experience. Eligibility for specific benefits listed on our careers site may vary based on the job and location. For more on benefits, please visit https://careers.bms.com/life-at-bms/. Benefit offerings are subject to the terms and conditions of the applicable plans then in effect and may include the following: Medical, pharmacy, dental and vision care. Wellbeing support such as the BMS Living Life Better program and employee assistance programs (EAP). Financial well-being resources and a 401(K). Financial protection benefits such as short- and long-term disability, life insurance, supplemental health insurance, business travel protection and survivor support. Work-life programs include paid national holidays and optional holidays, Global Shutdown Days between Christmas and New Year's holiday, up to 120 hours of paid vacation, up to two (2) paid days to volunteer, sick time off, and summer hours flexibility. Parental, caregiver, bereavement, and military leave. Family care services such as adoption and surrogacy reimbursement, fertility/infertility benefits, support for traveling mothers, and child, elder and pet care resources. Other perks like tuition reimbursement and a recognition program. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as "Transforming patients' lives through science", every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: Site-essential roles require 100% of shifts onsite at your assigned facility. 
Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
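To ground the idea of reviewing auto-generated source-to-target mappings before they are applied, here is a toy, library-free Python check over hypothetical schemas: every target field must map to an existing source column with a compatible type, and unmapped targets are flagged. The field names, types, and mapping are invented; this is not BMS tooling.

```python
# Toy sanity check on an auto-generated mapping (hypothetical schemas), e.g.
# one proposed by an LLM-assisted mapping step and reviewed here before use.
SOURCE_SCHEMA = {"sample_id": "string", "assay": "string",
                 "value": "float", "collected_on": "date"}

TARGET_SCHEMA = {"specimen_id": "string", "biomarker": "string",
                 "measurement": "float", "collection_date": "date"}

AUTO_MAPPING = {"specimen_id": "sample_id", "biomarker": "assay",
                "measurement": "value", "collection_date": "collected_on"}

def mapping_issues(mapping: dict) -> list[str]:
    issues = []
    for target, source in mapping.items():
        if source not in SOURCE_SCHEMA:
            issues.append(f"{target}: source column '{source}' not found")
        elif SOURCE_SCHEMA[source] != TARGET_SCHEMA.get(target):
            issues.append(f"{target}: type {SOURCE_SCHEMA[source]} != "
                          f"{TARGET_SCHEMA.get(target)}")
    missing = set(TARGET_SCHEMA) - set(mapping)
    issues += [f"{t}: no mapping proposed" for t in sorted(missing)]
    return issues

print(mapping_issues(AUTO_MAPPING) or "mapping passes basic checks")
```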

Posted 3 weeks ago
