
Auto-apply to these data science jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

Economics Expert AI Data Trainer
Invisible Agency
Philadelphia, Pennsylvania
Are you an expert in Economics? Join the team powering the next generation of AI language models.

Why This Matters
Large-scale language models are no longer just clever chatbots; they are becoming powerful engines for mathematical and scientific discovery. With the right training data, tomorrow's AI could:
- Democratize access to world-class education
- Stay current on leading-edge research
- Automate routine calculations, coding, and lab workflows for scientists everywhere
That training data starts with you.

Your Mission
As an expert, you'll act as a subject-matter "teacher" for the model:
- Write and solve problems in the domain of Economics.
- Design rubrics that define what a perfect answer looks like.
- Grade model outputs, pinpointing gaps in logic, ethics, or clarity.
- Coach the model to self-evaluate, reason step by step, and unlock creativity.
You'll collaborate with fellow expert trainers, quality analysts, and AI researchers, directly shaping how cutting-edge AI understands and reasons in the field of Economics.

Experience We're Looking For
Must-have:
- Graduate degree in Economics (PhD or Master's)
- Native-level proficiency in English
Nice-to-have:
- Peer-reviewed research
- University teaching or high-level tutoring
- Relevant industry experience in Economics

A Typical Day
- Data creation (core): authoring and solving domain-specific problems.
- Model assessment: scoring answers and refining evaluation criteria.
- Instruction refinement: rewriting prompts so the next trainer can work even faster.
- Quality & ethics reviews: flagging bias, inaccuracies, or unsafe content.
- Info syncs: short stand-ups or workshops on new campaigns and findings.

Who Thrives Here
- Critical thinkers who love deconstructing complex concepts.
- Clear communicators able to explain both what is wrong and why.
- Detail-oriented professionals with a strong ethical compass.
- Agile multitaskers who enjoy switching between micro-tasks and deep dives.
- Patient collaborators who give constructive, respectful feedback.

Compensation: $15 - $30 USD per hour

Ready to turn your expertise in Economics into the knowledge base for tomorrow's AI? Apply today and start teaching the model that will teach the world.

Employment type: Contract
Workplace type: Remote
Seniority level: Mid-Senior Level

Posted 30+ days ago

Senior Data Engineer
Baselayer
San Francisco, California
About Baselayer:
With experience across far-ranging banks, Fortune 500 tech co's, fintech unicorns 🦄, and AI experts, Baselayer is built by financial institutions, for financial institutions. Started in 2023 by experienced founders Jonathan Awad and Timothy Hyde, Baselayer has raised $20 Million and hit $2 Million in ARR faster than any other identity company in history. Today, more than 2,000 financial institution and government agency customers later, Baselayer is revolutionizing the way businesses approach fraud prevention and compliance. 🏆 Check out their press release here: https://baselayerhq.com/press/

About You:
You want to learn from the best of the best, get your hands dirty, and put in the work to hit your full potential. You're not just doing it for the win; you're doing it because you have something to prove and want to be great. You're hungry to become an elite data engineer, designing rock-solid infrastructure that powers cutting-edge AI/ML products.
- You have 1–3 years of experience in data engineering, working with Python, SQL, and cloud-native data platforms
- You've built and maintained ETL/ELT pipelines, and you know what clean, scalable data architecture looks like
- You're comfortable with structured and unstructured data, and you thrive on building systems that transform chaos into clarity
- You think in DAGs, love automating things with Airflow or dbt, and sweat the details when it comes to data integrity and reliability
- You're curious about AI/ML infrastructure, and you want to be close to the action: feeding the models, not just cleaning up after them
- You value ethical data practices, especially when dealing with sensitive information in environments like KYC/KYB or financial services
- You're a translator between technical and non-technical stakeholders, aligning infrastructure with business outcomes
- Highly feedback-oriented. We believe in radical candor and using feedback to get to the next level
- Proactive, ownership-driven, and unafraid of complexity, especially when there's no playbook

Responsibilities:
- Pipeline Development: Design, build, and maintain robust, scalable ETL/ELT pipelines that power analytics and ML use cases
- Data Infrastructure: Own the architecture and tooling for storing, processing, and querying large-scale datasets using cloud-based solutions (e.g., Snowflake, BigQuery, Redshift)
- Collaboration: Work closely with data scientists, ML engineers, and product teams to ensure reliable data delivery and feature readiness for modeling
- Monitoring & Quality: Implement rigorous data quality checks, observability tooling, and alerting systems to ensure data integrity across environments
- Data Modeling: Create efficient, reusable data models using tools like dbt, enabling self-service analytics and faster experimentation
- Security & Governance: Partner with security and compliance teams to ensure data pipelines adhere to regulatory standards (e.g., SOC 2, GDPR, KYC/KYB)
- Performance Optimization: Continuously optimize query performance and cost in cloud data warehouses
- Documentation & Communication: Maintain clear documentation and proactively share knowledge across teams
- Innovation & R&D: Stay on the cutting edge of data engineering tools, workflows, and best practices, bringing back what works and leveling up the team

Benefits:
- Hybrid in SF; in office 3 days/week
- Flexible PTO
- Healthcare, 401K
- Smart, genuine, ambitious team
- Start date: April

Salary Range: $135k – $220k + Equity (0.05% – 0.25%)

Posted 30+ days ago

Data Analyst
Guidehouse
Springfield, Virginia
Job Family: Data Science Consulting
Travel Required: None
Clearance Required: Active Top Secret SCI (TS/SCI)

What You Will Do:
This Data Analyst role will work with a business office at the National Geospatial-Intelligence Agency (NGA) to support analysis and management of Planning, Programming, Budgeting, and Execution (PPBE) data in accordance with DoD/Agency policies and requirements. This will include data visualization, business analytics, data management, and digital engineering processes. A strong understanding of Financial Management/PPBE and data analytics is essential to help the office effectively monitor and report on financial performance through reporting dashboards, tools, and the creation of data sets and key performance indicators. This role will be co-located with the client and requires excellent communication skills. We are seeking a candidate who can proactively identify program needs and help the office mature its visualizations and business analytics as technology and data solutions advance.

Specific duties will include:
- Apply extensive knowledge to integrate, develop, and maintain analytic models, visualizations, and tools to evaluate, analyze, and communicate financial performance
- Provide statistical and mathematical support
- Develop and maintain Tableau dashboard visualizations to address business questions and cyclical financial reporting
- Leverage a range of data sources via Tableau Server, Excel, and custom SQL queries
- Provide insights that will be used to inform financial decisions
- Brief financial statuses/reports and visualization tools to a range of audiences, from staff to executive level
- Manage an inventory of implemented dashboards, other analytic products, and the current product backlog for implementation

What You Will Need:
- An ACTIVE and MAINTAINED TOP SECRET/SCI (TS/SCI) Federal or DoD security clearance. Once onboard with Guidehouse, the new hire MUST be able to OBTAIN and MAINTAIN a TS/SCI Federal or DoD security clearance with a COUNTERINTELLIGENCE (CI) polygraph.
- Bachelor's degree
- Five (5) years of additional working experience supporting data analysis
- Some experience working with Government financial data, spend plans, or PPBE activities
- Demonstrated experience developing dashboards in Tableau

What Would Be Nice To Have:
- Experience working with Tableau Server and Tableau Prep
- Demonstrated ability to proactively identify methods and approaches to expand and enhance the analytic capacity and ability of an existing portfolio
- Demonstrated experience working with commercial-off-the-shelf (COTS) statistical software or tools for data visualization
- Experience with SQL, MySQL, Python, SPSS, SAS, Visual Basic, or R to summarize statistical data and create documents, reports, and presentations
- Demonstrated experience effectively communicating with various partners, stakeholders, or customers

What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include:
- Medical, Rx, Dental & Vision Insurance
- Personal and Family Sick Time & Company Paid Holidays
- Position may be eligible for a discretionary variable incentive bonus
- Parental Leave and Adoption Assistance
- 401(k) Retirement Plan
- Basic Life & Supplemental Life
- Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
- Short-Term & Long-Term Disability
- Student Loan PayDown
- Tuition Reimbursement, Personal Development & Learning Opportunities
- Skills Development & Certifications
- Employee Referral Program
- Corporate Sponsored Events & Community Outreach
- Emergency Back-Up Childcare Program
- Mobility Stipend

About Guidehouse
Guidehouse is an Equal Opportunity Employer: Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinances of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains, including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.

Posted 2 days ago

Sr. Data Integration Engineer
Jackson Lewis
Philadelphia, New York
Focused on employment and labor law since 1958, Jackson Lewis P.C.'s 1,000+ attorneys located in major cities nationwide consistently identify and respond to new ways workplace law intersects business. We help employers develop proactive strategies, strong policies, and business-oriented solutions to cultivate high-functioning workforces that are engaged and stable, and we share our clients' goals to emphasize belonging and respect for the contributions of every employee. The Firm is ranked in the First Tier nationally in the category of Labor and Employment Litigation, as well as in both Employment Law and Labor Law on behalf of Management, in the U.S. News - Best Lawyers® "Best Law Firms".

We seek a Senior Data Integration Engineer who will report directly to the Data Integration and Data Warehouse Manager and play a critical role in building and optimizing the firm's data integration pipelines. The ideal candidate will have deep hands-on expertise in designing, building, and managing complex data integration workflows, with a strong focus on tools like Boomi, Matillion, and other modern data integration platforms. The role requires expertise in API integration, Change Data Capture (CDC), batch processing, and delta processing, along with experience working within enterprise bus architectures to ensure seamless data flow across systems. The Sr. Data Integration Engineer will lead the end-to-end integration process, ensuring data is accurately and efficiently moved between source systems, data warehouses, and business intelligence platforms while maintaining data quality, consistency, and timeliness.

Key Responsibilities:
- Data Integration Pipeline Development: Design, build, and manage scalable and reliable data integration pipelines using tools like Boomi, Matillion, and Fivetran. Implement and optimize ETL/ELT workflows to ensure efficient data movement between internal and external systems. Develop real-time and batch integration processes that meet business requirements while ensuring data accuracy and timeliness. Lead the end-to-end development of integration solutions using Boomi; support the SDLC during the design process; gather and document business requirements, technical requirements, and data mapping; and plan and execute regression and integration testing, cutover, go-live, and post-go-live activities.
- API and Enterprise System Integration: Implement and maintain API-based integrations between various data sources and systems, including Salesforce, Workday, and Aderant. Ensure seamless two-way data synchronization between SQL databases and external APIs. Work closely with cross-functional teams to integrate API services into the firm's data architecture.
- Change Data Capture (CDC) & Delta Processing: Implement and manage Change Data Capture (CDC) mechanisms across various databases (e.g., SQL, cloud databases) to track and propagate changes in real time. Develop and implement batch and delta processing strategies to optimize data flow, minimize latency, and ensure timely updates. Ensure accurate and consistent data capture for historical data tracking and auditing purposes.
- Data Quality and Governance: Ensure high data quality across all integration processes, performing regular checks and validations to detect and correct any inconsistencies. Collaborate with the data warehouse and analytics teams to ensure data governance policies are followed, with a focus on security and compliance.
- Enterprise Bus Architecture: Contribute to designing and implementing the firm's Enterprise Bus Architecture, ensuring smooth data exchange between various internal and external systems. Build and manage processes to propagate real-time changes from SQL tables and other systems to the enterprise bus. Ensure the enterprise bus efficiently handles data flows and event-driven processing across the firm's data ecosystem.
- Performance Optimization: Continuously monitor and improve the performance of data integration pipelines to ensure optimal throughput and latency. Identify bottlenecks and inefficiencies in existing processes and propose solutions to improve data flow and processing time.
- Collaboration and Communication: Collaborate with cross-functional teams to gather requirements, provide updates, and deliver high-quality integration solutions that meet business needs and reporting requirements. Work closely with the data warehouse and analytics teams to ensure the data integration pipelines meet business needs and reporting requirements. Collaborate with external vendors and stakeholders to maintain integrations with third-party tools and systems (e.g., Aderant, Intapp, Workday). Provide technical documentation and training to other team members to ensure the successful operation and maintenance of data integration systems.

Required Skills & Experience:
- 7+ years of hands-on experience in data integration and ETL/ELT pipelines
- Deep experience with data integration platforms, particularly Boomi, Matillion, Fivetran, and dbt (or similar tools)
- Experience with Boomi connectors such as SFTP, HTTP Client, DB V2, Azure, etc.
- Strong understanding of API integration and building real-time data synchronization between systems
- Proven expertise in Change Data Capture (CDC), batch processing, delta processing, and incremental data loading strategies
- Experience with enterprise bus architecture for managing real-time data flow and event-driven integrations
- Hands-on experience with SQL databases and cloud-based data warehousing platforms (e.g., Snowflake)
- Proficiency in building data pipelines for systems like Salesforce, Workday, Aderant, and other cloud-based or enterprise applications
- Strong problem-solving skills and the ability to troubleshoot complex data integration challenges
- Excellent written and verbal communication skills to work effectively with both technical and non-technical stakeholders

Preferred Qualifications:
- Experience with legal domain systems such as Aderant, Intapp Time, Workday, and Compulaw
- Familiarity with cloud platforms (AWS, Azure, or Google Cloud)
- Understanding of data governance and data security practices within the legal industry
- Knowledge of data visualization tools (e.g., Tableau, Power BI) and how integrated data supports business reporting and analytics
- Experience working within a regulated industry or compliance-heavy environment
- Ability to write quick SQL and Python scripts as needed
- In-depth understanding of API design and development, including RESTful and SOAP protocols
- Familiarity with different data formats, such as JSON, XML, and CSV, and experience with data transformation techniques
- Solid knowledge of authentication and authorization mechanisms, security protocols, and data privacy considerations in integration projects

Why Join Us:
- Play a key role in building and maintaining critical data integration pipelines that power the firm's data-driven decision-making.
- Collaborate with a talented team of data professionals and help shape the firm's data integration and warehousing strategy.
- Competitive salary, benefits, and opportunities for professional growth within a fast-paced and innovative legal services firm.
- Be part of a team that embraces cutting-edge technologies and fosters a culture of innovation and collaboration.

#LI-LM1 #LI-Hybrid

For New York, the expected salary range for this position is between $151,400 and $177,000 per year. The actual compensation will be determined based on experience and other factors permitted by law. We are an equal employment opportunity employer. All qualified applicants will receive consideration for employment without regard to race, national origin, gender, age, religion, disability, sexual orientation, veteran status, marital status, or any other characteristics protected by law.

Posted 3 days ago

Senior/Staff Software Engineer, Data (Backend)
Viam
New York, NY
Viam helps companies unlock the power of AI, data, and automation in the physical world. We provide a single platform for engineers of all disciplines to solve problems together and build solutions that are fast and future-proof. Viam powers solutions across robotics, food and beverage, climate tech, marine, industrial manufacturing, and more. Founded in 2020 by former MongoDB co-founder and CTO Eliot Horowitz, Viam is headquartered in New York City.

We're looking for a Senior or Staff Software Engineer to help us build the future of data infrastructure in the Viam ecosystem. Our team is responsible for the full data lifecycle: capturing data on customer devices, syncing it reliably to the cloud, storing it efficiently, and making it accessible through flexible query patterns. A key challenge we face is the highly arbitrary nature of the data we work with. This can include anything from images or point clouds captured by robots, to streams of sensor readings, to structured outputs from customer applications. We're building scalable, generic systems to support this wide variety of data, and we're looking for someone who enjoys solving these kinds of complex, open-ended problems.

In this role, you will:
- Work on performant, reliable systems that allow our users to capture large amounts of data from their smart machines and sync it to our cloud service.
- Design and implement data processing and querying systems, and developer-friendly APIs. This includes integrating with AI and ML features, enabling data visualization, and powering customer applications.
- Collaborate across the engineering org, influencing high-level architecture decisions, contributing to the engineering roadmap, working directly with senior leadership, and mentoring other engineers.

We're looking for someone who:
- Has experience designing and building systems for processing, analyzing, and querying data. Bonus points if you have built such a system from the early stages, or from the ground up.
- Has an affinity for turning product requirements into straightforward, high-quality, well-tested software.
- Is excited to work as a team to refine and execute the Data team's vision.
- Has software engineering experience using languages such as Golang, C/C++, Java, or Python.
- Has experience working with cloud technology, such as gRPC, MongoDB or other databases, blob storage, Google Cloud Platform, AWS, etc.

Benefits:
- 100% covered medical/dental/vision insurance plans
- Competitive salary & equity packages (see below)
- Reproductive health benefits, including fertility benefits and abortion access
- Travel benefits
- 25 days paid vacation and generous holiday observances
- One Medical membership
- Citi Bike memberships
- Commuter benefits
- Monthly wellness stipend to be used for a variety of fitness-related items like gym memberships, fitness classes, fitness equipment, apparel, and more
- Free lunch every day that you're in the office
- Paid parental leave

The starting salary for this role is between $204,000 - $259,000/year. Your exact offer will vary based on a number of factors, including experience level, skillset, market location, and balancing internal equity relative to peers at the company. We recognize that the person we hire may be less experienced, or more senior, than this job description as posted. In these situations, the updated salary range will be communicated to you as a candidate. In addition to cash compensation, Viam offers a comprehensive Total Rewards package that includes equity grants, health benefits, and more.

Values:
- Vision Driven
- Collaborate Openly
- Act Decisively
- Succeed Through Diversity
- Hold Ourselves Accountable
- Lead with Curiosity
Learn more about our values here!

Posted 30+ days ago

GenAI and LLM Architect, Data
Credera Experienced Hiring Job Board
Dallas, TX
We are looking for an enthusiastic GenAI and LLM Architect to add to Credera's Data capability group. Our ideal candidate is excited about leading project-based teams in a client-facing role to analyze large data sets and derive insights through machine learning (ML) and artificial intelligence (AI) techniques. They have strong experience in data preparation and analysis using a variety of tools and programming techniques, building and implementing models, and creating and running simulations. The architect should be familiar with the deployment of enterprise-scale models into a production environment; this includes leveraging full development lifecycle best practices for both cloud and on-prem solutions across a variety of use cases.

You will act as the primary architect and technical lead on projects to scope and estimate work streams, architect and model technical solutions to meet business requirements, and serve as a technical expert in client communications. On a typical day, you might expect to participate in design sessions, provision environments, and coach and lead junior resources on projects.

WHO YOU ARE:
- Proven experience in the architecture, design, and implementation of large-scale and enterprise-grade AI/ML solutions
- 5+ years of hands-on statistical modeling and/or analytical experience in an industry or consulting setting
- Master's degree in statistics, mathematics, computer science, or a related field (a PhD is preferred)
- Experience with a variety of ML and AI techniques (e.g., multivariate/logistic regression models, cluster analysis, predictive modeling, neural networks, deep learning, pricing models, decision trees, ensemble methods, etc.)
- Proficiency in Python and frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers for model development and experimentation
- Strong understanding of NLP fundamentals, including tokenization, word embeddings, language modeling, sequence labeling, and text generation
- Experience with data processing using LangChain, data embedding using LLMs, vector databases, and prompt engineering
- Advanced knowledge of relational and non-relational databases (SQL, NoSQL)
- Proficient in large-scale distributed systems (Hadoop, Spark, etc.)
- Experience with designing and presenting compelling insights using visualization tools (RShiny, R, Python, Tableau, Power BI, D3.js, etc.)
- Passion for leading teams and providing both formal and informal mentorship
- Experience with wrangling, exploring, transforming, and analyzing datasets of varying size and complexity
- Knowledge of tools and processes to monitor model performance and data quality, including model tuning experience
- Strong communication and interpersonal skills, and the ability to engage customers at a business level in addition to a technical level
- Stays current with AI/ML trends and research; a thought leader in the AI area
- Experience implementing machine learning models in production environments through one or more cloud platforms: Google Cloud Platform, Azure cloud services, AWS cloud services

Basic Qualifications
- Thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success
- Contribute in a team-oriented environment
- Prioritize multiple tasks in order to consistently meet deadlines
- Creatively solve problems in an analytical environment
- Adapt to new environments, people, technologies, and processes
- Excel in leadership, communication, and interpersonal skills
- Establish strong work relationships with clients and team members
- Generate ideas and understand different points of view

Learn More
Credera is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, and AI and technology expertise to deliver valuable customer experiences and accelerated growth across a broad range of industries worldwide. Our one-of-a-kind global boutique approach means we provide our clients with tailored solutions unique to their organization that can scale due to our extensive footprint. As a values-led organization, our mission is to make an extraordinary impact on our clients, our people, and our community. We believe it is this approach that has allowed us to work with and transform the most influential brands and organizations in the world, from strategy through to execution. More information is available at www.credera.com. We are part of the OPMG Group of Companies, a division of Omnicom Group Inc.

Hybrid Work Model:
Our employees have the flexibility to work remotely two days per week. We expect our team members to spend 3 days per week in person, with the flexibility to choose the days and times that work best for both them and their project or internal teams. This could be at a Credera office or at the client site. You'll work closely with your project team to align on how you balance the flexibility that we want to provide with the connection of being together to produce amazing results for our clients.

The why: We are passionate about growing our people both personally and professionally. Our philosophy is that in-person engagement is critical for our ability to develop deep relationships with our clients and our team members; it's how we earn trust, learn from others, and ultimately become better consultants and professionals.

Travel:
Our goal is to keep out-of-market travel to a minimum, and most projects do not require significant travel. While certain projects can require frequent travel (up to 80% for a period of time), our average travel percentage over a year for team members is typically between 10-30%. We try to take a personal approach to travel. You will submit your travel preferences, which our staffing teams will take into account when aligning you to a role.

Credera will never ask for money up front and will not use apps such as Facebook Messenger, WhatsApp, or Google Hangouts for communicating with you. You should be very wary of, and carefully scrutinize, any job opportunity that asks for money prior to starting and/or one where all communications take place exclusively via chat.

Posted 30+ days ago

Architect, Data
Credera Experienced Hiring Job Board
Houston, TX
Credera is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, AI and technology expertise to deliver valuable customer experiences and accelerated growth across various industries. We continuously evolve our services to meet the needs of future organizations and reflect modern best practices. Our unique global approach provides tailored solutions, transforming the most influential brands and organizations worldwide.   Our employees, the lifeblood of our company, are passionate about making an extraordinary impact on our clients, colleagues, and communities. This passion drives how we spend our time, resources, and talents. Our commitment to our people and work has been recognized globally. Please visit our employer awards page:  https://www.credera.com/awards-and-recognition .   As an Architect in Credera’s Data capability group, you will lead teams in implementing modern data architecture, data engineering pipelines, and advanced analytical solutions. Our projects range from designing and implementing the latest data platform approaches (i.e. Lakehouse, DataOps, Data Mesh) using best practices and cloud solutions, building scalable data and ML pipelines, democratizing data through modern governance approaches, and delivering data products using advanced machine learning, visualization, and integration approaches. You will act as the primary architect and technical lead on projects to scope and estimate work streams, architect and model technical solutions to meet business requirements, serve as a technical expert in client communications, and mentor junior project team members. On a typical day, you might expect to participate in design sessions, build data structures for an enterprise data lake or statistical models for a machine learning algorithm, coach junior resources, and manage technical backlogs and release management tools. 
Additionally, you will seek out new business development opportunities at existing and new clients.

WHO YOU ARE:
- You have a minimum of 5 years of technical, hands-on experience building, optimizing, and implementing data pipelines and architecture
- You have experience leading teams to wrangle, explore, and analyze data to answer specific business questions and identify opportunities for improvement
- You are a highly driven professional who enjoys serving in a fast-paced, dynamic, client-facing role where delivering solutions that exceed high expectations is a measure of success
- You have a passion for leading teams and providing both formal and informal mentorship
- You have strong communication and interpersonal skills, and the ability to engage customers at a business level in addition to a technical level
- You have a deep understanding of data governance and data privacy best practices
- You incorporate AI tooling, efficiencies, and code-assistance tooling into your everyday workflows
- You have a degree in Computer Science, Computer Engineering, Engineering, Mathematics, Management Information Systems, or a related field of study

The ideal candidate will have recent technical knowledge of the following:
- Programming languages (e.g. Python, Java, C++, Scala)
- SQL and NoSQL databases (MySQL, DynamoDB, CosmosDB, Cassandra, MongoDB, etc.)
- Data pipeline and workflow management tools (Airflow, Dagster, AWS Step Functions, Azure Data Factory, etc.)
- Stream-processing systems (e.g. Storm, Spark Streaming, Pulsar, Flink)
- Data warehouse design (Databricks, Snowflake, Delta Lake, AWS Lake Formation, Iceberg)
- MLOps platforms (SageMaker, Azure ML, Vertex AI, MLflow)
- Container orchestration (e.g. Kubernetes, Docker Swarm)
- Metadata management tools (Collibra, Atlas, DataHub, etc.)
Experience with the data platform components of one or more of the following cloud service providers:
- AWS
- Google Cloud Platform
- Azure

Basic Qualifications:
- Thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success
- Contribute in a team-oriented environment
- Prioritize multiple tasks in order to consistently meet deadlines
- Creatively solve problems in an analytical environment
- Adapt to new environments, people, technologies, and processes
- Excel in leadership, communication, and interpersonal skills
- Establish strong work relationships with clients and team members
- Generate ideas and understand different points of view

Learn More: Credera is part of the Omnicom Precision Marketing Group (OPMG), a division of Omnicom Group Inc. OPMG is a global network of agencies that leverage data, technology, and CRM to create personalized and impactful customer experiences. OPMG offers a range of services, such as data-driven product/service design, technology strategy and implementation, CRM/loyalty strategy and activation, econometric and attribution modelling, technical and business consulting, and digital experience design and development.

Benefits: Credera provides a competitive salary and comprehensive benefits plan. Benefits include health, mental health, vision, dental, and life insurance, prescriptions, fertility and adoption benefits, community service days, paid parental leave, PTO, 14 paid holidays, matching 401(k), Healthcare & Dependent Flexible Spending Accounts, and disability benefits. For more information regarding Omnicom benefits, please visit www.omnicombenefits.com.

Hybrid Working Model: Our employees have the flexibility to work remotely two days a week. We expect team members to spend three days in person, with the freedom to choose the days and times that best suit them, their project, and their teams.
You'll collaborate with your project team to balance flexibility with the benefits of in-person connection, delivering outstanding results for our clients.

The Why: In-person engagement is essential for building strong relationships with clients and colleagues. It fosters trust, encourages learning, and helps us grow as consultants and professionals.

Travel: For our consulting roles, our goal is to minimize travel, and most projects do not require extensive travel. While some projects may involve up to 80% travel for a period, the annual average for team members is typically 10%–30%. We take a personal approach to travel by considering your submitted preferences when assigning roles.

All qualified applicants will receive consideration for employment without regard to race, color, religion, gender identity, sexual orientation, national origin, age, genetic information, veteran status, or disability.

Credera will never ask for money up front and will not use apps such as Facebook Messenger, WhatsApp or Google Hangouts for communicating with you. You should be very wary of, and carefully scrutinize, any job opportunity that asks for money prior to starting and/or one where all communications take place exclusively via chat.

Posted 30+ days ago

Senior Architect, GenAI and LLM, Data
Credera Experienced Hiring Job Board, Chicago, IL
We are looking for an enthusiastic Senior GenAI and LLM Architect to add to Credera’s Data capability group. Our ideal candidate is excited about leading project-based teams in a client-facing role to analyze large data sets and derive insights through machine learning (ML) and artificial intelligence (AI) techniques. They have strong experience in data preparation and analysis using a variety of tools and programming techniques, building and implementing models, and creating and running simulations. The Senior Architect should be familiar with the deployment of enterprise-scale models into a production environment; this includes leveraging full development lifecycle best practices for both cloud and on-prem solutions across a variety of use cases.

You will act as the primary architect and technical lead on projects to scope and estimate work streams, architect and model technical solutions to meet business requirements, and serve as a technical expert in client communications. On a typical day, you might expect to participate in design sessions, provision environments, and coach and lead junior resources on projects.

WHO YOU ARE:
- 8+ years of proven experience in the architecture, design, and implementation of large-scale and enterprise-grade AI/ML solutions, including hands-on statistical modeling and/or analytical experience in an industry or consulting setting
- Master’s degree in statistics, mathematics, computer science, or a related field (a PhD is preferred)
- Experience with a variety of ML and AI techniques (e.g. multivariate/logistic regression models, cluster analysis, predictive modeling, neural networks, deep learning, pricing models, decision trees, ensemble methods)
- Proficiency in Python and in frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers for model development and experimentation
- Strong understanding of NLP fundamentals, including tokenization, word embeddings, language modeling, sequence labeling, and text generation
- Experience with data processing using LangChain, data embedding using LLMs, vector databases, and prompt engineering
- Advanced knowledge of relational and non-relational databases (SQL, NoSQL)
- Proficiency in large-scale distributed systems (Hadoop, Spark, etc.)
- Experience designing and presenting compelling insights using visualization tools (R Shiny, R, Python, Tableau, Power BI, D3.js, etc.)
- Passion for leading teams and providing both formal and informal mentorship
- Experience wrangling, exploring, transforming, and analyzing datasets of varying size and complexity
- Knowledge of tools and processes to monitor model performance and data quality, including model tuning experience
- Strong communication and interpersonal skills, and the ability to engage customers at a business level in addition to a technical level
- Stays current with AI/ML trends and research; a thought leader in the AI area
- Experience implementing machine learning models in production environments through one or more cloud platforms: Google Cloud Platform, Azure, or AWS

Basic Qualifications:
- Thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success
- Contribute in a team-oriented environment
- Prioritize multiple tasks in order to consistently meet deadlines
- Creatively solve problems in an analytical environment
- Adapt to new environments, people, technologies, and processes
- Excel in leadership, communication, and interpersonal skills
- Establish strong work relationships with clients and team members
- Generate ideas and understand different points of view

Learn More: Credera is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, and AI and technology expertise to deliver valuable customer experiences and accelerated growth across a broad range of industries worldwide. Our one-of-a-kind global boutique approach means we provide our clients with tailored solutions unique to their organization that can scale due to our extensive footprint. As a values-led organization, our mission is to make an extraordinary impact on our clients, our people, and our community. We believe it is this approach that has allowed us to work with and transform the most influential brands and organizations in the world, from strategy through to execution. More information is available at www.credera.com. We are part of the OPMG Group of Companies, a division of Omnicom Group Inc.

Hybrid Work Model: Our employees have the flexibility to work remotely two days per week. We expect our team members to spend three days per week in person, with the flexibility to choose the days and times that work best for both them and their project or internal teams. This could be at a Credera office or at the client site. You'll work closely with your project team to align on how you balance both the flexibility that we want to provide with the connection of being together to produce amazing results for our clients.

The Why: We are passionate about growing our people both personally and professionally. Our philosophy is that in-person engagement is critical for our ability to develop deep relationships with our clients and our team members – it's how we earn trust, learn from others, and ultimately become better consultants and professionals.

Travel: Our goal is to keep out-of-market travel to a minimum, and most projects do not require significant travel. While certain projects can require frequent travel (up to 80% for a period of time), our average travel percentage over a year for team members is typically between 10–30%.
We try to take a personal approach to travel. You will submit your travel preferences which our staffing teams will take into account when aligning you to a role.   Credera will never ask for money up front and will not use apps such as Facebook Messenger, WhatsApp or Google Hangouts for communicating with you. You should be very wary of, and carefully scrutinize, any job opportunity that asks for money prior to starting and/or one where all communications take place exclusively via chat.
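The "vector databases and prompt engineering" qualification above ultimately rests on one operation: nearest-neighbor search over embeddings, usually by cosine similarity. Here is a minimal, illustrative sketch in plain Python; the document names and three-dimensional vectors are invented stand-ins (real embeddings come from an LLM embedding model, and a vector database indexes them at scale):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, corpus, k=1):
    """Return the k corpus documents whose embeddings are most similar to the query."""
    ranked = sorted(corpus.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Toy "vector database": document name -> hand-made embedding
corpus = {
    "invoice policy": [0.9, 0.1, 0.0],
    "vacation policy": [0.1, 0.9, 0.2],
}
print(top_k([0.8, 0.2, 0.1], corpus, k=1))  # -> ['invoice policy']
```

In retrieval-augmented generation, the documents returned by a search like this are pasted into the prompt as context before the LLM is asked to answer.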

Posted 30+ days ago

Data Engineer
Foursquare New York, NY
About Foursquare: Foursquare is the leading independent location technology and data cloud platform dedicated to building meaningful bridges between digital spaces and physical places. Our proprietary technology unlocks the most accurate, trustworthy location data in the world, empowering businesses to answer key questions, uncover hidden insights, improve customer experiences, and achieve better business outcomes. A pioneer of the geo-location space, Foursquare’s location tech stack is being utilized by the world’s largest enterprises and most recognizable brands.

About the Team: As a data engineer on the Places team, you will contribute to the platform services and pipelines that facilitate large-scale data ingestion and governance. You will ship software with high visibility and of strategic importance to Foursquare, directly impacting revenue and the experience of our customers and Open Source community. You will focus on implementing and productionizing our ML models, working closely with our Data Science team to improve model performance and scalability. The Places team owns all components of our places dataset: from ingestion and data expansion, to delivery mechanisms like our APIs. We own and iterate on the core building blocks of our customer and Open Source Places product offering, which lays the foundation for Foursquare’s other products and services.
In this role you’ll:
- Influence key decisions on architecture and implementation of scalable, automated data processing workflows
- Build big data processing pipelines using Spark and Airflow
- Focus on performance, throughput, and latency, and drive these throughout our architecture
- Write test automation, conduct code reviews, and take end-to-end ownership of deployments to production
- Write, deploy, and monitor services for data access by systems across our infrastructure
- Participate in on-call rotation duties
- Act as a force multiplier, conducting code reviews and coordinating cross-team efforts
- Implement and advocate for best practices in testing, code quality, and CI/CD pipelines

What you’ll need:
- BS/BA in a technical field such as computer science, or equivalent experience
- 3+ years of experience in software development, working with production-level code
- Proficiency in one or more of the programming languages we use: Python, Java, or Scala
- Excellent communication skills, including the ability to identify and communicate data-driven insights
- Self-driven and comfortable learning without much hand-holding
- Eagerness to learn new technologies
- Your own unique talents! If you don’t meet 100% of the qualifications outlined above, we encourage and welcome you to still apply!

Nice to have:
- Experience with relational or document-oriented database systems, such as Postgres and MongoDB, and experience writing SQL queries
- Experience with cloud infrastructure services, such as AWS (S3, EMR, EC2, Glue, Athena, SQS, SNS) or GCP
- Experience with data processing technologies and tools, such as Spark, Hadoop (HDFS, Hive, MapReduce), Athena, Airflow, Luigi

Our Tech Stack:
- Languages: Java, Scala, Python
- Pipeline orchestration: Airflow, Luigi
- Data processing frameworks: Spark, MapReduce, Scalding

At Foursquare, we are committed to providing competitive pay and benefits that are in line with industry and market standards.
Actual compensation packages are based on a wide array of factors unique to each candidate, including but not limited to skill set, years and depth of experience, and specific office location. The annual total cash compensation range is _____________; however, actual salaries can vary based on a candidate’s qualifications, skills, and competencies, as well as location. Salary is just one component of Foursquare’s total compensation package, which includes restricted stock units, multiple health insurance options, and a wide range of benefits!

Benefits and Perks:
- Flexible PTO – rest and recharge when you need it!
- Industry-leading healthcare – comprehensive and competitive health, vision, dental, and life insurance
- Savings and investments – 401(k) with company match
- Equipment setup – you will receive all necessary hardware for your job function
- Family planning and fertility programs – programs via Carrot
- Hybrid work schedule for in-person collaboration on Tuesdays, Wednesdays, and Thursdays

Things to know… Foursquare is proud to foster an inclusive environment that is free from discrimination. We strongly believe that in order to build the best products, we need a diversity of perspectives and backgrounds. This leads to a more delightful experience for our users and team members. We value listening to every voice and we encourage everyone to come be a part of building a company and products we love. Foursquare is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected Veteran status, or any other characteristic protected by law. Foursquare Privacy Policy #LI-HYBRID #LI-MM1
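The Spark/Airflow pipeline work this role describes rests on one idea: tasks form a directed acyclic graph and each task runs only after all of its upstream dependencies have finished. The toy scheduler below illustrates that ordering in plain Python; it is a sketch of the concept, not Airflow's actual API, and the extract/transform/load task names are invented:

```python
def run_pipeline(tasks, deps):
    """tasks: name -> callable; deps: name -> list of upstream task names.
    Executes every task in dependency order and returns the run order."""
    done, order = set(), []
    while len(done) < len(tasks):
        # A task is ready once all of its upstream dependencies have run
        ready = [t for t in tasks if t not in done
                 and all(d in done for d in deps.get(t, []))]
        if not ready:
            raise ValueError("cycle or missing dependency")
        for t in sorted(ready):  # deterministic tie-break
            tasks[t]()
            done.add(t)
            order.append(t)
    return order

results = {}
tasks = {
    "extract": lambda: results.setdefault("raw", [3, 1, 2]),
    "transform": lambda: results.setdefault("clean", sorted(results["raw"])),
    "load": lambda: results.setdefault("loaded", len(results["clean"])),
}
deps = {"transform": ["extract"], "load": ["transform"]}
print(run_pipeline(tasks, deps))  # -> ['extract', 'transform', 'load']
```

Airflow adds scheduling, retries, and distributed execution on top of this same dependency model, and Spark jobs typically sit inside the task bodies.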

Posted 30+ days ago

Intern, Automation Data Engineer
Cell Signaling Technology, Danvers, MA
Location: 3 Trask Lane, Danvers, MA 01923

Internship Summary: This internship opportunity is unique among software development roles. Typically, software engineers and data engineers are somewhat removed from the end environment when they develop a product, which can cause pain points and bottlenecks. In this role, the intern will be very hands-on and work closely with other team members and cross-functional groups. Beyond the technical experience, the intern will also develop many professional and soft skills in this internship environment.

Responsibilities – Technical Highlights:
- Basic DevOps practices and procedures
- Programming experience in a professional environment
- Prioritization exercises
- Story writing and organizational exercises
- Utilizing project management tools (JIRA)
- Project management
- GitHub and version control
- Cross-functional collaboration
- Reviewing legacy code and optimizing code from other engineers
- Collaboration with other engineers
- Ability to use their own creativity and ideas to create automated solutions
- Project-based internship
- Exposure to laboratory practice and liquid handling systems

Required Skills & Experience:
- Programming ability: proficiency in Python; familiarity with object-oriented programming and scripting; basic command line skills
- Data engineering basics: database experience (SQL, MySQL, AWS S3); familiarity with ETL processes (Extract, Transform, and Load)
- Analytical and problem-solving skills: data modeling – able to take raw data and efficiently model data structures to store, maintain, and retrieve data; able to analyze a problem, understand it, and work to create innovative solutions
Interpersonal and Team Skills:
- Enjoys working with others
- Able to explain your own work at both a fundamental and an abstract level
- Good communication, organizational, and prioritization skills

Preferred Qualifications:
- Programming ability: familiarity with data manipulation and general process automation (not necessarily in the context of biotech); GitHub and Git preferred
- Data engineering basics: familiarity with data pipeline tools (Kedro, ELK Stack)
- Analytical and problem-solving skills: familiarity with pandas DataFrames and Python's csv library
- Interpersonal and team skills: able to work cross-functionally in an efficient manner; interested in learning about other groups' workflows and processes

Application Instructions: Applications will ONLY be accepted if they are submitted via the CST Career site and must include the following:
- A resume
- A link to a YouTube video (a 60-second video introducing yourself and describing how an internship at CST will contribute to your future goals)
- 1 letter of recommendation
- A copy of an unofficial transcript

Graduate students are not eligible for this program. Note: Instructions on uploading videos to YouTube can be found by following this link. Letters of recommendation and unofficial transcripts may be emailed to internships@cellsignal.com. We will accept letters of recommendation and unofficial transcripts until July 31st, 2025.

Projected Program Dates: August 30th, 2025 – December 15th, 2025

Cell Signaling Technology, Inc. is committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, genetic information, status as a veteran or as a member of the military, or status in any group protected by applicable federal or state laws.
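The ETL (Extract, Transform, Load) pattern named in this posting's requirements can be sketched with nothing but Python's standard library. The CSV contents, column names, and the in-memory "load" step below are invented for illustration; a real pipeline would read files and write to SQL or S3:

```python
import csv
import io

# Stand-in for a raw CSV file; one row is deliberately malformed
raw = "sample,volume_ul\nA1,10.5\nA2,bad\nA3,12.0\n"

def extract(text):
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast columns to their proper types, dropping rows that fail to parse."""
    clean = []
    for row in rows:
        try:
            clean.append({"sample": row["sample"],
                          "volume_ul": float(row["volume_ul"])})
        except ValueError:
            continue  # skip malformed rows instead of crashing the load
    return clean

def load(rows):
    """In a real pipeline this would write to a database; here we return a dict."""
    return {r["sample"]: r["volume_ul"] for r in rows}

print(load(transform(extract(raw))))  # -> {'A1': 10.5, 'A3': 12.0}
```

Keeping each stage a separate function makes the steps individually testable, which is the main practical payoff of the pattern.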

Posted 30+ days ago

Data Scientist
NT Concepts, Vienna, VA
NTC OVERVIEW: We are seeking a Data Scientist to join our team. Working at NT Concepts means that you are part of an innovative, agile company dedicated to solving the most critical challenges in National Security. We’re looking for the best and the brightest to join us in supporting this mission. If meaningful work, initiative, creativity, and continuous self-improvement are important to your career, join our growing team and discover What's Next for you.

Mission Focus: As a Data Scientist, you will have the unique opportunity to research, design, and implement cutting-edge algorithms for a program focused on protecting computer vision algorithms from adversarial AI attacks. This requires data curation, coding in Python with PyTorch, and production of model explainability and model performance visuals. Additionally, you will contribute to the program’s source code, implementing data curation and data science techniques. Our delivery teams are driven to explore new ideas and technology, and care deeply about collaboration, feedback, and iteration. We follow SAFe agile practices, embrace the Ops ethos (DataOps/DevSecOps/MLOps) to “automate first”, use modern tech stacks, and constantly challenge each other to grow and improve. If cutting-edge data science projects resonate with you, and you care deeply about joining a mission-driven company with a strong growth direction and diverse culture, we'd love to learn more about you. Check out the details below, and let’s connect.

Technical members of our solutions teams require little guidance, but love to learn, collaborate, and problem-solve. This position requires a junior to mid level of experience, a passion for mission support, and a strong desire to solve our customers’ hardest technical and data challenges.

Clearance: Ability to obtain a TS/SCI clearance is required. US citizenship is required.

Location/Flexibility: Vienna, VA, with remote flexibility.
Responsibilities:
- Research, design, implement, and evaluate novel algorithms
- Implement algorithms in Python in a repeatable and scalable way
- Drive requirements for data preprocessing ahead of algorithm development
- Support creative delivery of solutions in a quickly evolving domain of AI/ML

Qualifications:
- You have 2+ years of experience designing and implementing AI/ML techniques, specifically those designed for imagery
- You have foundational knowledge of AI/ML methods and implementation strategies
- You have worked with imagery data, both overhead and ground-level imagery
- You desire a fast-paced, collaborative, Agile environment
- You are familiar with machine and deep learning libraries such as scikit-learn and PyTorch
- You think critically about hard problems
- You are proficient with the Python programming language
- You have worked with Git version control systems
- You are no stranger to the Linux OS

Physical Requirements: Prolonged periods sitting at a desk and working on a computer. Must be able to lift up to 10–15 pounds at times.

About NT Concepts: Founded in 1998 and headquartered in the Washington DC Metro area, NT Concepts is a private, mid-tier company with clients spanning the Intelligence and Defense communities. We deliver end-to-end data and technology solutions that advance the modernization, transformation, and automation of the national security mission—solutions with real impact developed in a strong engineering culture that encourages technical growth, leadership, and creative “big idea” problem-solving. Employees are the core of NT Concepts. We understand that world-changing concepts happen in collaborative environments. We are a company where talented teams work together using innovation and expertise to solve our clients’ most critical challenges. Here, you’ll gain competitive benefits, opportunities to bolster your skills and develop new abilities, and a company culture dedicated to support and service.
In addition to our benefits program, we encourage our employees to take part in #NTC_GivesBack , which paves the way for positive social change. If joining a stable company with strong professional growth opportunities resonates with you, and you seek vital, mission-driven projects (for some pretty cool clients) that use your specific talents, we’d love to have you move forward with us.  
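As a rough illustration of the adversarial attacks this program defends against, the sketch below applies a fast-gradient-sign-style (FGSM) perturbation to a hand-made logistic classifier. The weights, inputs, and epsilon are invented toy values; real work would target PyTorch vision models, where the gradient comes from backpropagation rather than a closed form:

```python
import math

w, b = [2.0, -1.0], 0.0  # toy linear classifier (invented weights)

def predict(x):
    """Probability of class 1 under a logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, eps):
    """Perturb each feature by eps in the sign of the loss gradient.
    For logistic loss, d(loss)/dx_i = (p - y) * w_i."""
    p = predict(x)
    return [xi + eps * (1 if (p - y) * wi > 0 else -1)
            for xi, wi in zip(x, w)]

x, y = [1.0, 0.5], 1          # a confidently classified class-1 example
x_adv = fgsm(x, y, eps=0.5)   # adversarially perturbed copy
print(predict(x), predict(x_adv))  # the model's confidence drops after the attack
```

Defenses in this space (adversarial training, input preprocessing, certified bounds) all start from modeling exactly this kind of gradient-guided perturbation.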

Posted 30+ days ago

Data Scientist
Bertram Capital Management, Foster City, CA
Bertram Capital is a private equity firm targeting investments in lower middle market companies. Since its inception in 2006, the firm has raised over $3.5B of capital commitments. Bertram has distinguished itself in the private equity community by combining venture capital operating methodologies with private equity financial discipline to empower its portfolio companies to unlock their full business potential. This approach is unique in that Bertram is not singularly focused on achieving its investment returns through financial engineering and the extraction of near-term cash flow. Instead, Bertram focuses on reinvestment and technology enablement to drive growth and value through digital marketing, e-commerce, big data and analytics, application development, and internal and external platform optimization. Visit www.bcap.com for more information.

Position Description: We are seeking a versatile Data Scientist to join our team. As a generalist, you will leverage your skills in data processing, modeling, visualization, and prompt engineering to evaluate potential investments, solve critical business challenges, and identify growth opportunities across multiple industry verticals, including consumer, industrial, and business services. Your work will directly impact investment decisions, revenue growth, operational efficiency, and the seamless integration of add-on acquisitions. This is a unique opportunity to work across multiple businesses, partnering with the Bertram Capital investment team, as well as the marketing, sales, and operational teams at portfolio companies. Experience in the investment management or financial industry is an advantage, as is exposure to working with large language models (LLMs) like ChatGPT.

Responsibilities:
- Data analysis & processing: collect, clean, and preprocess large datasets from diverse sources to ensure data quality and usability
- Model development: build, evaluate, and deploy predictive and descriptive models to solve business problems, such as customer segmentation, demand forecasting, and operational optimization
- Data visualization: create compelling visualizations and dashboards to communicate insights effectively to both technical and non-technical users
- Cross-functional collaboration: partner with management teams to make data-based decisions; work with other members of Bertram Labs to develop marketing campaigns and internal tools
- Revenue growth & operational efficiency: use analytics to identify opportunities for revenue optimization, operational improvements, and streamlined M&A integration processes
- Leverage new AI technologies: drive adoption of emerging generative AI technology within Bertram Capital and portfolio companies
- Industry research: stay updated on trends in the consumer, industrial, and business services sectors to inform data-driven strategies

Qualifications:
- BS, MS, or PhD in a quantitative field, or equivalent work experience
- 2+ years working in data science
- Exceptional communication skills
- Experience working cross-functionally and collaboratively
- Proficiency in SQL, Python, and typical data science libraries
- Proficiency in extracting data from databases, APIs, web scraping, and/or scripting
- Proficiency in business intelligence tools such as Tableau, Power BI, or Looker
- Experience prompting and using LLMs such as ChatGPT, Claude, Gemini, etc.

Compensation and Benefits: The expected salary range for this position is $180,000–$210,000 total annual compensation. Offered salary may be based on a variety of factors including skills, experience, and qualifications for the role. After one year of tenure, employees will receive an additional annual bonus. Comprehensive medical, dental, and vision benefits are provided at no cost to the employee. We offer a generous 401K match as well as a “take what you need” PTO policy.
Other perks include: cell phone stipend, engaging team events and holiday parties. If hired, employee will be in an “at-will position” and the Company reserves the right to modify base salary (as well as any other discretionary payment or compensation program) at any time, including for reasons related to individual performance, Company or individual department/team performance, and market factors. Diversity, Equity, and Inclusion At Bertram Capital we value and celebrate the many perspectives that arise from a variety of cultures, genders, religions, national origins, ages, abilities, socioeconomic status and sexual orientation. Our commitment to Diversity, Equity and Inclusion (DEI) ensures that Bertram is a place that attracts, grows, and promotes top talent from all backgrounds.    
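Customer segmentation, one of the modeling problems this posting names, is usually framed as clustering. The toy one-dimensional k-means below is illustrative only (the spend figures are invented); production work would use a library such as scikit-learn's KMeans on multi-feature data:

```python
def kmeans_1d(values, k, iters=20):
    """Cluster 1-D values into k groups; returns sorted centroids."""
    # Seed centroids by sampling evenly across the sorted values
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for v in values:
            # Assign each value to its nearest centroid
            idx = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            groups[idx].append(v)
        # Move each centroid to the mean of its assigned values
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return sorted(centroids)

# Invented example: customer annual spend with two obvious segments
spend = [120, 150, 130, 900, 950, 880]
print(kmeans_1d(spend, 2))  # two centroids, one per spending tier
```

The resulting centroids label each customer with a segment, which downstream teams can target with different campaigns.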

Posted 30+ days ago

Sourcing Data Analyst
ChargePoint, Campbell, CA
About Us: With electric vehicles expected to be nearly 30% of new vehicle sales by 2025 and more than 50% by 2040, electric mobility is becoming a reality. ChargePoint (NYSE: CHPT) is at the center of this revolution, powering one of the world’s leading EV charging networks and a comprehensive set of hardware, software, and mobile solutions for every charging need across North America and Europe. We bring together drivers, businesses, automakers, policymakers, utilities, and other stakeholders to make e-mobility a global reality. Since our founding in 2007, ChargePoint has focused solely on making the transition to electric easy for businesses, fleets, and drivers. ChargePoint offers a once-in-a-lifetime opportunity to create an all-electric future and a trillion-dollar market. At ChargePoint, we foster a positive and productive work environment by committing to live our values of Be Courageous, Charge Together, Love our Customers, Operate with Openness, and Relentlessly Pursue Awesome. These values guide how we show up every day, align, and work together to build a brighter future for all of us. Join the team that is building the EV charging industry and make your mark on how people and goods will get everywhere they need to go, in any context, for generations to come.

Reports To: Senior Director, Global GSM

Qualifications:
- Experience with data models and reporting packages
- Ability to analyze large datasets
- Ability to process large sets of data, write comprehensive reports, and create dashboards for the business to make sound decisions
- Strong verbal and written communication skills
- An analytical mind and an inclination for problem-solving
- Attention to detail and a results-oriented mindset
- Bachelor’s degree, or a minimum of 8+ years of proven experience in a Data Analyst role
- Advanced Microsoft Office skills (Excel, PowerPoint)

Requirements:
- Use analytical/statistical programming tools for data extraction and summarization, statistics, visualization, and analysis
- Develop and modify queries to extract large amounts of data for standard and ad-hoc data requests
- Collect and review quotations from vendors; analyze and create visual representations and summary reports of findings
- Develop meaningful dashboards and presentations that use information to inform and influence business activities and strategies
- Create quotation packets for new product items
- Extract, sort, cleanse, aggregate, and process data from multiple sources, developing queries and reports based on business requirements
- Work both independently and as part of a larger team supporting various internal customer groups to identify business challenges and evaluate solutions to achieve objectives
- Collaborate with cross-functional teams to develop business cases, identify business problems, and understand desired business outcomes
- Complete analysis and apply judgment to derive recommendations for complex challenges and initiatives, such as best-total-cost comparisons
- Collect and gather data from multiple input sources, analyze it, and present it to larger stakeholder groups
- Prepare and present standard and ad-hoc analysis to business partners to help guide decisions and support results

Location: Campbell, CA (this is an onsite position, 5 days a week)

ChargePoint is committed to fair and equitable compensation practices. The targeted US salary range for roles at this operating level is $60,000 to $135,000. This range represents base salary and does not reflect equity, benefits, or variable pay where applicable. Actual base salaries are based on several factors unique to each candidate, including but not limited to skill set, experience, certifications, and specific work location. We are committed to an inclusive and diverse team. ChargePoint is an equal opportunity employer.
We do not discriminate based on race, color, ethnicity, ancestry, national origin, religion, sex, gender, gender identity, gender expression, sexual orientation, age, disability, veteran status, genetic information, marital status or any legally protected status. If there is a match between your experience/skills and the Company's needs, we will contact you directly. Applicants only; recruiting agencies, please do not contact.

Posted 1 week ago

Data Scientist SME
Dark Wolf Solutions
Chantilly, VA
Dark Wolf Solutions is seeking a highly motivated and experienced Data Scientist SME who will be responsible for leading all aspects of budget formulation and execution, the IT portfolio management program, and managing business transformation activities. The position requires specialized support across multiple projects. Responsibilities: Providing system and requirements integration with the components that perform the IT development/engineering of capabilities in support of an enterprise portfolio and project management tool. Managing and supporting the overall financial and business efforts, including a new initiative related to a cost recovery model. Assessing and analyzing data through conversations with IT and finance Subject Matter Experts (SMEs), as well as documenting and briefing data related to the subject. Supporting a startup integrated security initiative with a focus on analyzing current processes, approvals and training while formulating a plan for the future. Communicating with security SMEs and developing a roadmap. Assisting in integrating different requirements and capabilities from other offices across the broader community into a centralized system. Designing, testing, and integrating new security products as directed by the GTM. Integrating security products, including designs for all networks. Providing technical oversight and direction to designated boards for integrating new technology or major new mission capabilities into the environment, standards and structures. Engaging with customers to determine the nature of the requirement/analytic problem, evaluate options, and offer Information Technology (IT)-based recommendations. Resolving complex IT problems efficiently and accurately while adhering to standards and procedures. Planning, testing, and integrating new and upgraded versions of the relevant IT components, systems, applications, and tools. Identifying opportunities to integrate and improve resources to accomplish customer needs.
Reviewing existing Information Technology (IT) programs and assisting in making refinements, reducing operating time, and improving current techniques. Planning new designs for integration into a budget reconciliation database structure, using knowledge of the characteristics of the systems being added to the structure and the specifications for database interfaces to ensure effective integration and optimal database performance. Providing for the logical conversion of customer or product requirements into total system solutions. Reviewing and acquiring data from primary or secondary cost recovery data sources and maintaining databases/data systems. Identifying, analyzing, and interpreting trends or patterns in complex data sets, including expenditure plan requirements and business process analysis. Providing insight into industry trends and making recommendations on future direction; identifying new technologies and assessing their technical and performance characteristics. Supporting the development and monitoring of overall community-wide policy and procedures to support mission goals and operations. Staying abreast of directives, regulations, guidance, notices and standards. Analyzing current processes and available tools to enhance change management processes. Creating and executing a communications plan that facilitates a timely and transparent flow of information. Responding to senior-level leadership queries on activities. Developing all types of documents and reports by creating and updating graphics presentations to improve their quality and usability. Required Qualifications: Possess 5+ years of experience managing and tracking budgetary funds. Over 5 years of experience in portfolio and resource management, optimizing resource allocation for maximum efficiency.
Proven ability to deliver exceptional customer service, effectively communicating organizational accomplishments, status, and strategic direction to stakeholders at all levels. Successfully developed and implemented use cases that significantly improved information retrieval efficiency. Expertise in data analysis, requirements gathering, and translating complex data into clear, visual representations for effective communication. Proficient in developing and managing project schedules using MS Project, identifying critical tasks, and leveraging EVM for comprehensive project status monitoring. Skilled in resource allocation, optimizing assignments by identifying key resources and matching skill sets to project requirements. Experienced in developing Minimum Viable Increments (MVIs) based on thorough requirements gathering. Strong background in business/financial areas. Excellent written communication skills, consistently producing high-quality, publishable documents. Experienced in both development and integration projects, demonstrating versatility and adaptability. US Citizen with an active Top Secret/Sensitive Compartmented Information (TS/SCI) security clearance with polygraph. Desired Qualifications: Proficient in utilizing ServiceNow for IT service management. Familiar with the on-site customer environment and operational domain. Experienced with budget planning and execution processes within the organization. This position is located in Chantilly, VA. The estimated salary range for this position is $170,000.00 - $210,000.00, commensurate with clearance, technical skill set, and overall experience. We are proud to be an EEO/AA employer: Minorities/Women/Veterans/Disabled and other protected categories. In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire.

Posted 1 day ago

Data Engineer - Monetization
Twitch
San Francisco, CA
About Us Twitch is the world’s biggest live streaming service, with global communities built around gaming, entertainment, music, sports, cooking, and more. It is where thousands of communities come together for whatever, every day. We’re about community, inside and out. You’ll find coworkers who are eager to team up, collaborate, and smash (or elegantly solve) problems together. We’re on a quest to empower live communities, so if this sounds good to you, see what we’re up to on LinkedIn and X, and discover the projects we’re solving on our Blog. Be sure to explore our Interviewing Guide to learn how to ace our interview process. About the Role As a data engineer in the Monetization organization, you will work on making Twitch’s massive data sets robust and consumable by people around the company. You’ll work with a team of other data engineers to build and maintain data pipelines that allow our stakeholders to understand our business. You can work from Twitch’s headquarters in San Francisco, CA.
You Will: Work closely with data scientists and engineers to create robust data architectures and pipelines Develop and manage scalable data pipelines to extract, transform, and load data from various sources Collaborate with stakeholders to understand business requirements and translate them into data-driven solutions Automate data processing and reporting workflows to improve efficiency and data integrity Implement data quality checks and monitor processes to maintain high data accuracy Simplify and enhance the accessibility, clarity, and usability of large and complex datasets through the development of data cubes and data sharing solutions Develop and manage scalable, automated, and fault-tolerant data solutions using technologies such as Spark, EMR, Python, Redshift, Glue, and S3 You Have: 1+ year of experience working in a data engineering, data science, or software engineering capacity Deep expertise in writing and maintaining robust data pipelines in SQL and Python Ability to turn data requirements from stakeholders into actionable plans Extensive experience with cloud computing tools, like AWS, Azure, or Google Cloud Proven experience implementing best practices in data pipelines, ensuring accuracy, consistency, and reliability Bonus Points Bachelor's or Master's degree in computer science, statistics, information systems, or a related technical field Experience with a scripting language (e.g., Python, Java, or R) Familiarity and interest in Twitch Perks Medical, Dental, Vision & Disability Insurance 401(k) Maternity & Parental Leave Flexible PTO Amazon Employee Discount Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. Job ID: TW8800 Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from our lowest geographic market up to our highest geographic market.
Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. This position will remain open until filled. For more information, please visit https://www.twitch.tv/jobs/en/#learn-more . Applicants should apply via our internal or external career site. US Pay Per Year $91,200 - $185,000 USD Twitch is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Twitch values your privacy. Please consult our Candidate Privacy Notice for information about how we collect, use, and disclose personal information of our candidates.

Posted 30+ days ago

Staff Software Engineer, Custodian Data
Ridgeline
Reno, NV
Staff Software Engineer, Custodian Data Location: Reno, NV, San Ramon, CA Do you have a passion for finance & investing? Are you interested in modeling the industry’s data and making it highly available? Are you a technical leader who enjoys refining both technology performance and team collaboration? If so, we invite you to join our innovative team. As a Ridgeline Staff Software Engineer on our Custodian Data team, you’ll have the unique opportunity to build an industry-defining, fast, scalable custodian engine with full asset class support and global market coverage. You will be relied on for your technical leadership to help the team evolve our architecture, scale to meet our growth opportunity, and exemplify software engineering best practices. Our team of engineers is building with cutting-edge technologies, including AI tools like GitHub Copilot and ChatGPT, in a fast-moving, creative, progressive work environment. You’ll be encouraged to think outside the box, bringing your own vision, passion, and insights to drive advancements that impact both our team and the industry. Our team is committed to creating a lasting impact on the investment management industry, leveraging AI and leading development practices to bring transformative change. You must be work authorized in the United States without the need for employer sponsorship. What you will do: Contribute domain knowledge, design skills, and technical expertise to a team where design, product, and engineering collaborate closely Be involved in the entire software development process, from requirements and design reviews to shipping code and observing how it lands with our customers.
Impact a developing tech stack based on AWS back-end services Participate in the creation and construction of developer-based automation that leads to scalable, high-quality applications customers will depend on to run their businesses Coach, mentor, and inspire teams of product engineers who are responsible for delivering high-performing, secure enterprise applications Think creatively, own problems, seek solutions, and communicate clearly along the way Contribute to a collaborative environment deeply rooted in learning, teaching, and transparency Desired Skills and Experience 8+ years in a software engineering position with a history of architecting and designing new products and technologies Experience building cloud-native applications on AWS/Azure/Google Cloud A degree in Computer Science, Information Science, or a related discipline Extensive experience in Java or Kotlin Experience with API and event design Background in high-availability systems Experience with L2 and L3 support, and participation in an on-call rotation
Experience with production instrumentation, observability, and performance monitoring Willingness to learn about new technologies while simultaneously developing expertise in a business domain/problem space Understanding of the value of automated tests at all levels Ability to focus on short-term deliverables while maintaining a big-picture long-term perspective Serious interest in having fun at work Bonus: 3+ years of experience engineering in Data Pipeline, Reconciliation, Market Data, or other Fintech applications Understanding of AWS services and infrastructure Experience with Docker or containerization Experience with agile development methodologies Experience with React Experience with caching Experience with data modeling Experience leading difficult technical projects that take multiple people and teams to complete Ability to handle multiple projects and prioritize effectively Excellent communication skills, both written and verbal Willingness to learn about cutting-edge technologies while cultivating expertise in a business domain/problem space An aptitude for problem-solving Ability to amplify the ideas of others Responsibility for delivering an excellent project that extends beyond coding Ability to adapt to a fast-paced and changing environment Compensation and Benefits The typical starting salary range for new hires in this role is $174,000 - $220,000. Final compensation amounts are determined by multiple factors, including candidate experience and expertise, and may vary from the amount listed above. As an employee at Ridgeline, you’ll have many opportunities for advancement in your career and can make a true impact on the product. In addition to the base salary, 100% of Ridgeline employees can participate in our Company Stock Plan subject to the applicable Stock Option Agreement. We also offer rich benefits that reflect the kind of organization we want to be: one in which our employees feel valued and are inspired to bring their best selves to work.
These include unlimited vacation, educational and wellness reimbursements, and $0-cost employee insurance plans. Please check out our Careers page for a more comprehensive overview of our perks and benefits. About Ridgeline Ridgeline is the industry cloud platform for investment management. It was founded in 2017 by visionary entrepreneur Dave Duffield (co-founder of both PeopleSoft and Workday) to address the unique technology challenges of an industry in need of new thinking. We are building a modern platform in the public cloud, purpose-built for the investment management industry to empower businesses like never before. Headquartered in Lake Tahoe with offices in Reno, Manhattan, and the Bay Area, Ridgeline is proud to have built a fast-growing, people-first company that has been recognized by Fast Company as a “Best Workplace for Innovators,” by LinkedIn as a “Top U.S. Startup,” and by The Software Report as a “Top 100 Software Company.” Ridgeline is proud to be a community-minded, discrimination-free equal opportunity workplace. Ridgeline processes the information you submit in connection with your application in accordance with the Ridgeline Applicant Privacy Statement. Please review the Ridgeline Applicant Privacy Statement in full to understand our privacy practices and contact us with any questions.

Posted 1 day ago

Senior Data Analyst (Hybrid)
Empassion
New York, NY
About Empassion Empassion is a Management Services Organization (MSO) focused on improving the quality of care and reducing costs for an often neglected “advanced illness / end of life” patient population, representing 4 percent of the Medicare population but 25 percent of its costs. The impact is driven deeper by families who are left with minimal options and decreased time with their loved ones. Empassion enables increased access to tech-enabled proactive care while delivering superior outcomes for patients, their communities, the healthcare system, families, and society. The Opportunity Join our high-impact Data & Analytics team to shape a modern, flexible analytics platform that powers Empassion’s mission. As a Senior Data Analyst, you’ll collaborate with analytics engineers and cross-functional partners—Growth, Product, Operations, and Finance—to turn complex data into actionable insights. Using tools like SQL, dbt, and Looker, you’ll build pipelines, models, and dashboards that decode patient care journeys and amplify our value to partners. This is a chance to influence both internal strategy and external impact from day one. What You’ll Do 🌟 Partner with teams across the business to pinpoint analytics needs and deliver solutions that solve real problems. 🔍 Dig into proprietary app data and third-party sources (e.g., medical claims) to map care journeys, assess provider performance, and fuel growth strategies. 👥 Support growth and partner strategy by analyzing medical claims to size opportunities, evaluate program impact, and surface insights that inform sales conversations and expansion priorities. 🚀 Enhance and scale data models with SQL and dbt, ensuring precision and adaptability for new partnerships. 📊 Craft intuitive Looker dashboards and Explores with LookML, empowering self-serve access to trusted metrics. 🤝 Team up with Product and Tech to evolve reporting as our platform grows, working in shared dev environments.
📝 Document processes and train users—technical and non-technical—to maximize tool adoption. ⏳ Balance your time across modeling (dbt), dashboarding (Looker), and ad hoc analysis. What You’ll Bring - 2–6 years in data analytics or analytics engineering, with a knack for turning data into insights and visuals that drive decisions. - SQL mastery—writing efficient, reliable queries on complex datasets. - Hands-on experience with dbt for modeling and Looker for dashboards/LookML. - Strong communication to bridge technical and non-technical worlds—think engineers, operators, and external partners. - A proactive mindset, thriving in a fast-paced setting with iterative problem-solving. - Curiosity about operational workflows and a drive to partner with non-technical teams, ensuring data and reporting align with how the business actually runs. You're not just a spec-taker, you're part of the solution. - Curiosity about healthcare workflows and a passion for patient impact. - A collaborative spirit, eager to build scalable, user-friendly tools. Bonus Points - Knowledge of healthcare data (claims, ADT feeds, eligibility files). - Experience with internally built apps alongside Product/Engineering teams. - Familiarity with Git/GitHub for version control. - Early-stage startup experience (seed/Series A), especially mission-driven ones. Why Empassion? Impact: End your day knowing your work shapes patient care and family experiences. Growth: Expand your skills with a team that prioritizes internal promotions. Team: Work with top-tier clinicians, operators, and technologists. Flexibility: Remote-first with a hybrid NYC option (2x/week in-person). We sync via Slack/Zoom, meet for biannual offsites, and travel as needed to build trust and momentum. Our Culture We’re a tight-knit, passionate crew holding ourselves to high standards—because our data directly affects lives. We’re remote-first, U.S.-distributed , and NYC-hybrid, prioritizing clear deliverables and weekly alignment. 
Expect a dynamic environment where you’ll flex across modeling, reporting, and analysis to meet evolving needs. Ready to Make a Difference? If you’re driven by data, healthcare, and impact, apply and let’s talk!

Posted 30+ days ago

Software Engineer Intern - Data
PlusAI
Santa Clara, CA
We are seeking a Software Engineer Intern to join our team and contribute to the development of externally shareable metrics dashboards. This project aims to replace our current dashboard system with a more suitable solution that can facilitate the sharing of dashboards with our customers. The intern will be responsible for various aspects of this project, including data processing, data synchronization, metrics calculation, frontend chart development, data security, and access control. Responsibilities: Data Processing and Synchronization: Assist in the processing and synchronization of data to ensure that the dashboards are up-to-date and accurate. Metrics Calculation: Develop or contribute to the development of metrics calculation algorithms to ensure that the dashboards provide meaningful insights. Frontend Chart Development: Develop and maintain frontend charts using modern web technologies to ensure the dashboards are visually appealing and user-friendly. Data Security and Access Control: Implement and maintain data security measures to ensure that the dashboards are secure and that access is controlled appropriately. Collaboration and Communication: Work closely with the team and stakeholders to ensure clear communication and smooth project execution. Required Skills: MS or PhD student in Computer Science, Data Science, or a related field. Demonstrated experience in software development, preferably in a web environment. Proficiency in programming languages such as Python or JavaScript. Knowledge of data visualization tools (e.g., Superset, Tableau, or other similar platforms). Familiarity with web development frameworks (e.g., React, Angular, or Vue.js). Understanding of data security and access control principles. Strong problem-solving and analytical skills. Excellent communication and teamwork skills. Ability to work independently and as part of a team.
Our internship hourly rates are a standard pay determined based on the position and your location, year in school, degree, and experience.

Posted 2 weeks ago

AWS Data Engineer (Senior)
Mactores
Seattle, WA
Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with a digital transformation via assessments, migration, or modernization. Mactores is seeking an AWS Data Engineer (Senior) to join our team. The ideal candidate will have extensive experience in PySpark and SQL, and have worked with data pipelines using Amazon EMR or AWS Glue. The candidate must also have experience in data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, Presto, and orchestration experience using Airflow. What you will do: Develop and maintain data pipelines using Amazon EMR or AWS Glue. Create data models and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto. Build and maintain the orchestration of data pipelines using Airflow. Collaborate with other teams to understand their data needs and help design solutions. Troubleshoot and optimize data pipelines and data models. Write and maintain PySpark and SQL scripts to extract, transform, and load data. Document and communicate technical solutions to both technical and non-technical audiences. Stay up-to-date with new AWS data technologies and evaluate their impact on our existing systems. What are we looking for? Bachelor's degree in Computer Science, Engineering, or a related field. 3+ years of experience working with PySpark and SQL. 2+ years of experience building and maintaining data pipelines using Amazon EMR or AWS Glue. 2+ years of experience with data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto. 1+ years of experience building and maintaining the orchestration of data pipelines using Airflow.
Strong problem-solving and troubleshooting skills. Excellent communication and collaboration skills. Ability to work independently and within a team environment. You are preferred if you have: AWS Data Analytics Specialty Certification; experience with Agile development methodology. Life at Mactores We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work. 1. Be one step ahead 2. Deliver the best 3. Be bold 4. Pay attention to the detail 5. Enjoy the challenge 6. Be curious and take action 7. Take leadership 8. Own it 9. Deliver value 10. Be collaborative We would like you to read more details about the work culture on https://mactores.com/careers The Path to Joining the Mactores Team At Mactores, our recruitment process is structured around three distinct stages: Pre-Employment Assessment: You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role. Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities. HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team. At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs.
All employment decisions will be made in compliance with these principles. Note: Please answer as many questions as possible with this application to accelerate the hiring process.

Posted 30+ days ago

Health Information Technology (HIT) and Data Strategist - Vaccines - Southeast Region
Sanofi
Charleston, SC
Job title: Health Information Technology (HIT) and Data Strategist - Vaccines - Southeast Region Location: US Remote About the Job As a Health Information Technology (HIT) and Data Strategist, you will drive innovative technology initiatives with health systems and organized customers to improve vaccination rates. Working closely with Strategic Account Managers and National Account Directors, you'll help healthcare leadership optimize clinical workflows through existing technology investments (EHR, PHM, CRM, Digital Solutions) and transform data into actionable insights that improve operational outcomes. We are an innovative global healthcare company that helps the world stay ahead of infectious diseases by delivering more than 500 million vaccine doses a year. Across different countries, our talented teams are exploring new technologies to protect people and promote healthy communities. We chase the miracles of science every single day, pursuing progress to make a real impact on millions of patients around the world.


Economics Expert AI Data Trainer
Invisible Agency, Philadelphia, Pennsylvania

Job Description

Are you an expert in Economics?

Economics Expert AI Data Trainer

Join the team powering the next generation of AI language models.

Why This Matters

Large‑scale language models are no longer just clever chatbots—they’re becoming powerful engines for mathematical and scientific discovery. With the right training data, tomorrow’s AI could:

  • Democratize access to world‑class education

  • Stay current on leading‑edge research

  • Automate routine calculations, coding, and lab workflows for scientists everywhere

That training data starts with you.

Your Mission

As an Expert, you’ll act as a subject‑matter “teacher” for the model:

  • Write & solve problems in the domain of Economics.

  • Design rubrics that define what a perfect answer looks like.

  • Grade model outputs, pinpointing gaps in logic, ethics, or clarity.

  • Coach the model to self‑evaluate, reason step‑by‑step, and unlock creativity.

You’ll collaborate with fellow expert trainers, quality analysts, and AI researchers—directly shaping how cutting‑edge AI understands and reasons in the field of Economics.

Experience We’re Looking For

Must‑Have

  • Graduate degree in Economics (PhD or Master's)

  • Native-level proficiency in English

Nice‑to‑Have

  • Peer‑reviewed research

  • University teaching or high‑level tutoring

  • Relevant industry experience in Economics

A Typical Day

  • Data creation (core) – authoring and solving domain‑specific problems.

  • Model assessment – scoring answers and refining evaluation criteria.

  • Instruction refinement – rewriting prompts so the next trainer can work even faster.

  • Quality & ethics reviews – flagging bias, inaccuracies, or unsafe content.

  • Info syncs – short stand‑ups or workshops on new campaigns and findings.

Who Thrives Here

  • Critical thinkers who love deconstructing complex concepts.

  • Clear communicators able to explain both what is wrong and why.

  • Detail‑oriented professionals with a strong ethical compass.

  • Agile multitaskers who enjoy switching between micro‑tasks and deep dives.

  • Patient collaborators who give constructive, respectful feedback.

Compensation

$15–$30 USD per hour

Ready to turn your expertise in Economics into the knowledge base for tomorrow’s AI?
Apply today and start teaching the model that will teach the world.

Employment type: Contract
Workplace type: Remote
Seniority level: Mid‑Senior Level