
Auto-apply to these data science jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

Product Manager - Data API
The Weather Company – Andover, Massachusetts
About The Weather Company: The Weather Company is the world's leading weather provider, helping people and businesses make more informed decisions and take action in the face of weather. Together with advanced technology and AI, The Weather Company's high-volume weather data, insights, advertising, and media solutions across the open web help people, businesses, and brands around the world prepare for and harness the power of weather in a scalable, privacy-forward way. The world's most accurate forecaster, the company reaches hundreds of enterprise clients and more than 360 million monthly active users via its digital properties from The Weather Channel (weather.com) and Weather Underground (wunderground.com).

Job brief: The Weather Company is seeking a strategic, data-driven, and customer-obsessed Enterprise API Product Manager to lead and scale our API portfolio. The weather API portfolio includes over a hundred industry-agnostic as well as industry-specific weather APIs that are sold across a growing number of sales channels to customers around the globe. This role will drive the strategy, roadmap, and execution of our Weather API product, ensuring it meets the needs of developers, businesses, and end users. The ideal candidate will have a strong background in product management, API development, and building intuitive self-serve platforms, with a passion for leveraging weather data to solve real-world problems. Your success will be the result of your strong business and technical acumen, an experimentation mindset, and a scrappy, self-starting attitude to iterate quickly, deliver value, and unlock growth.

The impact you'll make:
- Own the Weather API Product Portfolio: Manage and evolve a diverse set of APIs, including foundational data APIs and vertical-specific solutions, ensuring product-market fit across multiple industries.
- Product Strategy & Roadmap: Define and execute the vision, strategy, and roadmap for the weather data API and self-serve platform, aligning with business goals and customers' needs.
- API Development & Management: Collaborate with the API platform team on the design, development, and maintenance of a scalable, reliable, and secure weather data API, ensuring high performance, low latency, and robust documentation.
- Customer Journey Optimization: Drive the design and execution of the end-to-end product lifecycle, from customer discovery to trial, sale, and renewal, ensuring a frictionless journey and best-in-class developer experience.
- Self-Serve Platform Ownership: Lead the creation and optimization of a self-serve platform that enables developers and businesses to easily discover, access, and integrate the APIs, including features like API key management, usage analytics, and billing.
- API Marketplace Strategy: Partner with business leadership to execute strategies for API discovery, distribution, and monetization through marketplaces (e.g., AWS Marketplace, Snowflake, RapidAPI) and direct enterprise channels.
- Cross-Functional Leadership: Collaborate with engineering, marketing, sales, customer success, and partner teams to align roadmap execution and go-to-market strategy.
- Experimentation & Innovation: Embrace a test-and-learn approach to rapidly experiment, validate concepts, and refine offerings based on data and user feedback.
- Market & Trend Analysis: Stay at the forefront of API trends, standards, protocols, and the competitive landscape to inform product direction, identify new opportunities, and maintain a competitive edge.
- Metrics & Impact: Define and track KPIs to measure product adoption, customer engagement, API performance, and revenue impact.

What you've accomplished:
- 5+ years of product management experience, with at least 3 years focused on API products or developer platforms.
- Proven success in bringing APIs to market, ideally across both direct sales and marketplaces.
- Proven track record of designing and launching self-serve platforms, including features like user onboarding, subscription management, and analytics dashboards.
- Experience working with weather data, geospatial data, or similar data-intensive APIs is a strong plus.
- Strong understanding of API design principles (e.g., RESTful APIs), authentication protocols (OAuth, API keys), and developer tooling.
- Deep appreciation for the developer experience and technical documentation standards.
- Business-savvy and commercially minded, with experience crafting pricing, packaging, and monetization strategies.
- Analytical and data-driven; a passion for unpacking customer insights to guide decisions.
- Excellent communication and collaboration skills; ability to rally cross-functional teams toward a common goal.
- Scrappy, self-starter attitude with a bias for action and ownership.
- Passionate about solving customer problems and delivering frictionless, scalable solutions.
- Experience with API-first companies, SaaS platforms, or data providers.
- Familiarity with cloud marketplaces and third-party developer ecosystems.

TWCo Benefits/Perks:
- Flexible Time Off program
- Hybrid work model
- A variety of medical insurance options, including $0-premium employee coverage
- Benefits effective day 1 of employment, including a competitive 401K match with no vesting requirement and national health, dental, and vision plans
- Progressive family plan benefits
- An opportunity to work for a global and industry-leading technology company
- Impactful work in a collaborative environment

Posted 30+ days ago

Member of Technical Staff – Data Platform & Annotation Tools
Inflection AI – Palo Alto, CA
Inflection AI is a public benefit corporation leveraging our world-class large language model to build the first AI platform focused on the needs of the enterprise.

Who we are: Inflection AI was re-founded in March of 2024, and our leadership team has assembled a team of kind, innovative, and collaborative individuals focused on building enterprise AI solutions. We are an organization passionate about what we are building, enjoy working together, and strive to hire people with diverse backgrounds and experience. Our first product, Pi, is an empathetic, conversational chatbot. Pi is a public instance built from our 350B+ frontier model with our sophisticated fine-tuning (10M+ examples), inference, and orchestration platform. We are now focusing on building new systems that directly support the needs of enterprise customers using this same approach. Want to work with us? Have questions? Learn more below.

About the Role: As a Data Platform Engineer, you'll design the systems and tools that transform raw data into the lifeblood of our models—clean, richly labeled, and continuously refreshing datasets. Your work will span scalable ingestion pipelines, active-learning loops, human-and-AI annotation workflows, and quality-control analytics. The platform you build will power every stage of the model lifecycle—from supervised fine-tuning to retrieval-augmented generation and reinforcement learning.

This is a good role for you if you:
- Have hands-on experience building data or annotation platforms that support large-scale ML workloads
- Are fluent in Python, SQL, and modern data stacks (Spark/Flink, DuckDB/Polars, Arrow, Kafka/Airflow/Flyte)
- Understand how class balance, bias, leakage, and adversarial filtering impact ML data quality and model performance
- Have managed human-in-the-loop labeling operations, including vendor selection, rubric design, and LLM-assisted automation
- Care deeply about reproducibility and observability, tracking everything from dataset hashes to annotation agreement scores and drift detection
- Communicate clearly with both research scientists and non-technical stakeholders

Responsibilities include:
- Ingest and transform large multimodal corpora (text, code, audio, vision) using scalable ETL, normalization, and deduplication pipelines
- Build annotation tools (web UIs, task queues, consensus engines, and review dashboards) to enable fast and accurate labeling by both crowd vendors and internal experts
- Design active-learning and RLHF data loops that surface high-value samples for human review, integrate synthetic LLM feedback, and support continuous iteration
- Version, audit, and govern datasets with lineage tracking, privacy controls, and automated quality metrics (toxicity, PII, brand consistency)
- Collaborate with training, inference, and safety teams to define data specs, evaluate dataset health, and unlock new model capabilities
- Contribute upstream to open-source data and annotation tools (e.g., Flyte, Airbyte, Label Studio) and share best practices with the community

Employee Pay Disclosures: At Inflection AI, we aim to attract and retain the best employees and compensate them in a way that appropriately and fairly values their individual contributions to the company. For this role, Inflection AI estimates a starting annual base salary will fall in the range of approximately $175,000 - $350,000, depending on experience. This estimate can vary based on the factors described above, so the actual starting annual base salary may be above or below this range.

Interview Process: Please apply on LinkedIn or our website for a specific role. After speaking with one of our recruiters, you'll enter our structured interview process, which includes the following stages:
1. Hiring Manager Conversation – An initial discussion with the hiring manager to assess fit and alignment.
2. Technical Interview – A deep dive with an Inflection Engineer to evaluate your technical expertise.
3. Onsite Interview – A comprehensive assessment, including a domain-specific interview, a system design interview, and a final conversation with the hiring manager.

Depending on the role, we may also ask you to complete a take-home exercise or deliver a presentation. For non-technical roles, be prepared for a role-specific interview, such as a portfolio review.

Decision Timeline: We aim to provide feedback within one week of your final interview.

Posted 30+ days ago

Data Engineer - Regulatory Reporting
Clear Street – New York, NY
About Clear Street: Clear Street is building financial infrastructure for today's institutions. Founded in 2018, Clear Street is an independent, non-bank prime broker replacing the legacy infrastructure used across capital markets. We started from scratch by building a completely cloud-native clearing and custody system designed for today's complex, global market. Our platform is fully integrated with central clearing houses and exchanges to support billions in trading volume per day. We've agonized about our data model abstractions, created horizontal scalability, and crafted thoughtful APIs, all so we can provide a best-in-class experience for our clients. By combining highly skilled product and engineering talent with seasoned finance professionals, we're building the essentials to compete in today's fast-paced markets.

The Team: The Control Engineering team at Clear Street is a small team with a great deal of responsibility. We work closely with stakeholders to ensure that we meet all our financial and regulatory obligations in a timely fashion. Our goal is to make the lives of our stakeholders easier by leveraging innovative technology to create automated solutions for their workflows. As Clear Street grows into more product lines and geographical regions, every member of the team will have an opportunity to have an immense impact on the firm as a whole. Join our team and be part of a dynamic environment where you can make a significant contribution, collaborate with talented professionals, and work with cutting-edge technology to drive operational excellence.

The Role: As an experienced Software Engineer on the Controls Engineering team, you will play an integral role in automating our Compliance & Finance regulatory processes. You'll build on your analytical skills to create solutions that process large amounts of data from our data warehouse to generate clean, correct reporting, including building out reconciliations. You will partner with key stakeholders across the Compliance, Finance, and Treasury teams to understand regulatory/financial obligations and business requirements, and translate them into clean designs and scalable solutions. As the tech lead on the team, you will provide technical guidance, prioritize work, perform hands-on design, development, and code review, and evolve our technical standards and best practices.

Tech Stack: Python, SQL, Snowflake, Retool, Docker, Kubernetes, Argo, Metaplane, REST APIs

Required Qualifications:
- You have at least five (5) years of software design and development experience, including CI/CD, source code control, testing, and quality management.
- You are highly proficient in Python, SQL, and database design, and have experience working with data warehouses like Snowflake to generate complex solutions at scale.
- You are a self-starter with a sense of urgency and an eagerness to learn and explore new technologies as appropriate to solve business problems.
- You are a strong communicator who can interact in a clear and concise manner with non-technical business stakeholders, product managers, and engineers.
- You have the ability to troubleshoot and logically assess problems and determine solutions.

Bonus Qualifications:
- You have experience working in the post-trade automation space designing and architecting systems that deliver solutions to complex data problems.
- You have experience generating complex financial industry regulatory reports like EMIR, MiFID, SFTR, CPR, K-Factor, etc.

We Offer:
- The opportunity to join a growing team of good people, where you can make a difference.
- A meritocratic philosophy that champions collaboration.
- A new, high-quality code base with little technical debt and room to build new services and features.
- An environment that embraces the utility of a DevOps-oriented culture and combines it with a focus on CI/CD methodology.
- Competitive compensation, benefits, and perks.
The Base Salary Range for this role is $140,000 - $190,000. This range is representative of the starting base salaries for this role at Clear Street. Where a candidate falls in this range will be based on job related factors such as relevant experience, skills, and location. This range represents Base Salary only, which is just one element of Clear Street's total compensation. The range stated does not include other factors of total compensation such as bonuses or equity. At Clear Street, we offer competitive compensation packages, company equity, 401k matching, gender neutral parental leave, and full medical, dental and vision insurance. Our belief has always been that we are better as a business when we are all together in person. As such, beginning on January 2, 2023, we are requiring employees to be in the office 4 days per week. In-office benefits include lunch stipends, fully stocked kitchens, happy hours, a great location, and amazing views. Our top priority is our people. We’re continuously investing in a culture that promotes collaboration. We help each other through challenges and celebrate each other's successes. We believe that modern workplaces succeed by virtue of having high-performance workforces that are diverse — in ideas, in cultures, and in experiences. We put in the effort to make such a workplace a daily reality and are proud to be an equal opportunity employer. #LI-Hybrid

Posted 30+ days ago

Data Scientist
Ownwell – Austin, TX
Company Background: Ownwell has developed a leading end-to-end property tax solution that is purpose-built for SFR and CRE investors, operators, and property managers. We have brought Data Science and Machine Learning to a space that is ripe for disruption. We combine a best-in-class technology stack with local market expertise to reduce expenses, increase Net Operating Income, and drive operational efficiency for both our institutional clients and individual homeowners. Ownwell's solution ensures you have the necessary tools, resources, and information to confidently manage your property taxes. Ownwell has been recognized, both in Austin and nationally, as a top workplace by the likes of Fortune, BuiltIn, Inc., and Best Places To Work. We are well-funded and venture-backed by some of the best investors in the world, such as First Round Capital and Bessemer Venture Partners. Our customer base has grown by more than 300% year-over-year with exceptional feedback demonstrating clear product-market fit. We are looking for driven and passionate team members who thrive in a collaborative, positive culture where we all win together. If this sounds like the place for you, come help us change the way everyday homeowners manage their real estate across the country.

Our Culture: People are our superpower! Centered in everything we do is a true sense of team. We listen and we learn from each other. We are on this rocketship together and embrace a fast-paced, truly collaborative environment. We are here to win as a team and as a company. We've brought together General Appraisers, Certified Public Accountants, Property Tax Consultants, Data Scientists, PhDs, best-in-class customer support representatives, and more to deliver top results for our customers.

Our core values are our guiding principles in everything we do:
- Customer Obsession
- Take Ownership
- Do The Right Thing
- Go Far Together
- Accelerate Innovation

Meet The Engineering Team: Ownwell's engineering team is the backbone of our technology, building and scaling the systems that power our mission. We're a tight-knit group of builders—engineers, scientists, and problem solvers—who collaborate closely with product, marketing, operations, and sales teams to deliver tools that make a real impact for property owners. We value curiosity, ownership, and a willingness to tackle tough challenges head-on. If you enjoy working in a fast-paced environment where your ideas shape the future of the company, you'll fit right in. As a Data Scientist at Ownwell, you'll play a pivotal role in leveraging data to refine and enhance our proprietary technology. You will build predictive models, conduct analyses, and develop machine learning algorithms to uncover insights that drive strategic business decisions. Your work will directly support product innovation, marketing effectiveness, operational efficiency, and customer satisfaction.

Responsibilities:
- Develop and deploy predictive models and machine learning solutions to optimize business processes and enable new products and use cases.
- Analyze large, complex datasets to generate actionable insights, inform strategy, and improve operational efficiency.
- Collaborate closely with product teams, engineers, and business stakeholders to integrate data science solutions into customer-facing and internal systems.
- Design and execute experiments to validate and enhance product features and business initiatives.
- Continuously improve data quality, model performance, and data governance practices.
- Maintain and optimize data pipelines and analytical processes, ensuring scalability and reliability.
- Stay abreast of emerging trends in data science and machine learning methodologies to inform and elevate Ownwell's data capabilities.

Requirements:
- 2+ years of professional experience in data science or analytics roles.
- Proficiency in Python for data analysis and machine learning. We do not use R and are exclusively looking for people who are proficient in Python.
- Familiarity with SQL, including querying databases, data manipulation, and relational database concepts.
- Experience developing and deploying machine learning models in production environments.
- Understanding of statistical methods and experiment design.
- Familiarity with cloud services, such as AWS, for data processing and model deployment.
- Excellent communication skills, with the ability to clearly present analytical findings to non-technical stakeholders.

Ownwell offerings:
- Entrepreneurial culture. Own your career; we are here to support you in the journey.
- Access to First Round Network to build your community outside of Ownwell.
- Flexible PTO. We believe in giving you the flexibility to own your time off. In addition to flexible time off, you will get 11 company holidays. We offer the last week of the year to recharge and reset.
- Competitive health benefits. We care for you and your family's health, as reflected in our benefits coverage.
- Learning support through a $1,000 stipend per year to enable investing in your individual learning needs.
- Supporting the parental journey. We offer up to 16 weeks of fully paid parental and bonding leave to support your journey as a new parent.
- As applicable, complimentary real estate and tax consulting licensing and renewal.

Ownwell's vision is to democratize access to real estate expertise. When we say we want to provide access, we mean providing access to everyone. To do that well, we need a team that's broadly representative. We welcome people from all backgrounds, ethnicities, cultures, and experiences. Ownwell is an equal opportunity employer. We do not discriminate on the basis of race, color, ancestry, religion, national origin, sexual orientation, age, citizenship, marital or family status, disability, gender identity or expression, veteran status, or any other status.

Posted 30+ days ago

Data Architect
Dynamis, Inc. – Huntsville, AL
The Data Architect for the DeCPTR-Nuclear project is responsible for designing and implementing a secure, centralized data architecture essential for nuclear radiation survivability testing. This role involves creating a robust framework that ensures efficient storage, retrieval, and management of test data, in compliance with ISO 9001 standards and MDA guidance. The Data Architect will collaborate with Data Scientists and Information Systems (IS) Business Analysts to ensure seamless data integration, accessibility, and analysis, supporting the project's strategic objectives and advancement.

Responsibilities:
- Data Architecture Design: Develop and maintain a scalable data architecture framework that supports the project's data management needs.
- Infrastructure Implementation: Oversee the implementation of data storage solutions, ensuring they are secure, efficient, and compliant with industry standards.
- Collaboration with Data Scientists: Work closely with Data Scientists to ensure that the architecture supports advanced data analysis and modeling, providing the necessary infrastructure for data-driven insights.
- Collaboration with IS Business Analysts: Partner with IS Business Analysts to design information systems that facilitate data flow, integration, and accessibility across varied platforms and stakeholders.
- Data Integrity and Security: Implement best practices for data integrity, security, and compliance, conducting regular audits and updates as necessary.
- Optimization: Continuously assess and optimize the data architecture to enhance performance and support evolving project requirements.

Requirements:
- U.S. Citizenship required.
- Bachelor's Degree required in Computer Science, Information Technology, Data Science, or a related field.
- A minimum of 5-8 years of experience in data management, database design, or IT infrastructure, preferably within the defense or aerospace sectors.
- Proficiency in database technologies (e.g., SQL, NoSQL), data modeling, architectures, cloud services, and big data technologies.

Certifications:
- Certified Data Management Professional (CDMP)
- AWS Certified Solutions Architect or similar cloud platform certifications

Preferred:
- Technical Expertise: Strong understanding of database management systems, data warehousing, and ETL (Extract, Transform, Load) processes. Proficiency in cloud services and big data technologies.
- Analytical Skills: Ability to design data models and architectures that support business needs, ensuring data integrity and accessibility.
- Communication Skills: Excellent ability to communicate complex technical ideas to both technical and non-technical stakeholders.
- Problem-Solving Skills: Proficient in troubleshooting complex data issues and designing scalable, efficient data solutions.
- Project Management: Experience with project management methodologies and tools, including Agile or Lean practices.
- Compliance: Familiarity with ISO 9001 quality management standards and DoD regulatory requirements related to data management.

Posted 30+ days ago

Principal Software Engineer, Data Persistence
Ridgeline – New York, NY
Principal Software Engineer, Data Persistence
Location: New York, NY

Are you a systems-minded engineer obsessed with consistency, availability, and durability at scale? Do you enjoy working deep in the stack to build resilient, multi-tenant infrastructure that supports real-time, mission-critical workloads? Are you excited to contribute to distributed systems that are performant, reliable, and secure, especially when it matters most? If so, we invite you to be a part of our innovative team.

As a Principal Software Engineer on Ridgeline's Data Persistence team, you will design, build, and evolve the core systems that underpin our data platform, ensuring reliability, performance, and elasticity across a complex, multi-region architecture. You'll lead major architectural decisions, optimize distributed storage systems, and ensure compliance with strict regulatory standards like SEC and SOC 2. This is a high-impact role on a small, specialized team where your work directly enables critical business operations for investment managers. You'll leverage cutting-edge technologies, including AI-powered tools like GitHub Copilot and ChatGPT, to accelerate development and drive high-quality outcomes at scale. You must be work authorized in the United States without the need for employer sponsorship.

What will you do?
- Design and evolve our distributed database architecture, including storage engines, query layers, and consistency models.
- Evaluate and optimize write/read paths, indexing strategies, replication mechanisms, and failover recovery techniques.
- Lead the strategic roadmap for how we scale our multi-tenant, microservice-based architecture while ensuring strong guarantees (consistency, availability, durability).
- Partner with product, SRE, and platform teams to shape the future of our persistence, observability, and data access patterns.
- Optimize database infrastructure for cost-efficiency, balancing performance and scalability to improve platform margins at scale.
- Mentor senior engineers and serve as a thought leader across the organization.

Desired Skills and Experience:
- 15+ years of experience in software engineering with a strong focus on database systems.
- Have authored or deeply contributed to high-performance distributed systems, databases, or storage engines.
- Deep fluency in CAP theorem tradeoffs, Raft/Paxos, LSM-tree vs. B-tree internals, compaction strategies, and query execution plans.
- Production experience scaling systems that handle TBs–PBs of data across multiple regions or data centers.
- Comfortable navigating between practical tradeoffs and theoretical foundations in your technical decision-making.
- Can clearly articulate complex systems to diverse audiences and influence engineering direction across orgs.
- Strong hands-on experience with AWS cloud-native architectures, including services like Aurora Global Database, S3, Route 53, and Lambda.
- Strong experience with database observability tools like Datadog DBM or equivalent.
- Proficiency in at least one programming language (Kotlin, Java, Python, TypeScript).

Nice to Have:
- Contributions to open-source database technologies (e.g., PostgreSQL, ClickHouse, RocksDB).
- Experience with hybrid transactional/analytical processing (HTAP) systems or stream processing architectures.
- Familiarity with emerging trends like vector databases, CRDTs, or columnar storage engines.
- Author, speaker, or deep contributor to technical books, blogs, or conference talks.

About Ridgeline: Ridgeline is the industry cloud platform for investment management. It was founded in 2017 by visionary entrepreneur Dave Duffield (co-founder of both PeopleSoft and Workday) to address the unique technology challenges of an industry in need of new thinking. We are building a modern platform in the public cloud, purpose-built for the investment management industry to empower businesses like never before. Headquartered in Lake Tahoe with offices in Reno, Manhattan, and the Bay Area, Ridgeline is proud to have built a fast-growing, people-first company that has been recognized by Fast Company as a "Best Workplace for Innovators," by LinkedIn as a "Top U.S. Startup," and by The Software Report as a "Top 100 Software Company." Ridgeline is proud to be a community-minded, discrimination-free equal opportunity workplace. Ridgeline processes the information you submit in connection with your application in accordance with the Ridgeline Applicant Privacy Statement. Please review the Ridgeline Applicant Privacy Statement in full to understand our privacy practices and contact us with any questions.

Compensation and Benefits: The typical starting salary range for new hires in this role is targeted at $255,000 - $300,000. Final compensation amounts are determined by multiple factors, including candidate experience and expertise, and may vary from the amount listed above. As an employee at Ridgeline, you'll have many opportunities for advancement in your career and can make a true impact on the product. In addition to the base salary, 100% of Ridgeline employees can participate in our Company Stock Plan subject to the applicable Stock Option Agreement. We also offer rich benefits that reflect the kind of organization we want to be: one in which our employees feel valued and are inspired to bring their best selves to work. These include unlimited vacation, educational and wellness reimbursements, and $0-cost employee insurance plans. Please check out our Careers page for a more comprehensive overview of our perks and benefits. #LI-Hybrid

Posted 1 week ago

Staff Software Engineer, Custodian Data
Ridgeline – Reno, NV
Staff Software Engineer, Custodian Data
Location: Reno, NV; San Ramon, CA

Do you have a passion for finance and investing? Are you interested in modeling the industry’s data and making it highly available? Are you a technical leader who enjoys refining both technology performance and team collaboration? If so, we invite you to join our innovative team.

As a Ridgeline Staff Software Engineer on our Custodian Data team, you’ll have the unique opportunity to build an industry-defining, fast, scalable custodian engine with full asset class support and global market coverage. You will be relied on for your technical leadership to help the team evolve our architecture, scale to meet our growth opportunity, and exemplify software engineering best practices. Our team of engineers builds with cutting-edge technologies, including AI tools like GitHub Copilot and ChatGPT, in a fast-moving, creative, progressive work environment. You’ll be encouraged to think outside the box, bringing your own vision, passion, and insights to drive advancements that impact both our team and the industry. Our team is committed to creating a lasting impact on the investment management industry, leveraging AI and leading development practices to bring transformative change.

You must be authorized to work in the United States without the need for employer sponsorship.

What you will do:
- Contribute domain knowledge, design skills, and technical expertise to a team where design, product, and engineering collaborate closely
- Be involved in the entire software development process, from requirements and design reviews to shipping code and observing how it lands with our customers
- Impact a developing tech stack based on AWS back-end services
- Participate in the creation and construction of developer-based automation that leads to scalable, high-quality applications customers will depend on to run their businesses
- Coach, mentor, and inspire teams of product engineers who are responsible for delivering high-performing, secure enterprise applications
- Think creatively, own problems, seek solutions, and communicate clearly along the way
- Contribute to a collaborative environment deeply rooted in learning, teaching, and transparency

Desired Skills and Experience:
- 8+ years in a software engineering position with a history of architecting and designing new products and technologies
- Experience building cloud-native applications on AWS, Azure, or Google Cloud
- A degree in Computer Science, Information Science, or a related discipline
- Extensive experience in Java or Kotlin
- Experience with API and event design
- Background in high-availability systems
- Experience with L2 and L3 support, and participation in an on-call rotation
- Experience with production instrumentation, observability, and performance monitoring
- Willingness to learn about new technologies while simultaneously developing expertise in a business domain/problem space
- Understanding of the value of automated tests at all levels
- Ability to focus on short-term deliverables while maintaining a big-picture, long-term perspective
- Serious interest in having fun at work

Bonus:
- 3+ years of experience engineering in data pipeline, reconciliation, market data, or other fintech applications
- Understanding of AWS services and infrastructure
- Experience with Docker or containerization
- Experience with agile development methodologies
- Experience with React
- Experience with caching
- Experience with data modeling
- Experience leading difficult technical projects that take multiple people and teams to complete
- Ability to handle multiple projects and prioritize effectively
- Excellent communication skills, both written and verbal
- Willingness to learn about cutting-edge technologies while cultivating expertise in a business domain/problem space
- An aptitude for problem-solving
- Ability to amplify the ideas of others
- Responsibility for delivering an excellent project that extends beyond coding
- Ability to adapt to a fast-paced and changing environment

Compensation and Benefits

The typical starting salary range for new hires in this role is $174,000 - $220,000. Final compensation amounts are determined by multiple factors, including candidate experience and expertise, and may vary from the amount listed above. As an employee at Ridgeline, you’ll have many opportunities for advancement in your career and can make a true impact on the product. In addition to the base salary, 100% of Ridgeline employees can participate in our Company Stock Plan subject to the applicable Stock Option Agreement. We also offer rich benefits that reflect the kind of organization we want to be: one in which our employees feel valued and are inspired to bring their best selves to work. These include unlimited vacation, educational and wellness reimbursements, and $0-cost employee insurance plans. Please check out our Careers page for a more comprehensive overview of our perks and benefits.

About Ridgeline

Ridgeline is the industry cloud platform for investment management. It was founded in 2017 by visionary entrepreneur Dave Duffield (co-founder of both PeopleSoft and Workday) to address the unique technology challenges of an industry in need of new thinking. We are building a modern platform in the public cloud, purpose-built for the investment management industry to empower businesses like never before. Headquartered in Lake Tahoe with offices in Reno, Manhattan, and the Bay Area, Ridgeline is proud to have built a fast-growing, people-first company that has been recognized by Fast Company as a “Best Workplace for Innovators,” by LinkedIn as a “Top U.S. Startup,” and by The Software Report as a “Top 100 Software Company.” Ridgeline is proud to be a community-minded, discrimination-free equal opportunity workplace. Ridgeline processes the information you submit in connection with your application in accordance with the Ridgeline Applicant Privacy Statement. Please review the Ridgeline Applicant Privacy Statement in full to understand our privacy practices and contact us with any questions.

Posted 30+ days ago

Staff Data Engineer
GoFundMe - San Francisco, CA
Want to help us, help others? We’re hiring! GoFundMe is the world’s most powerful community for good, dedicated to helping people help each other. By uniting individuals and nonprofits in one place, GoFundMe makes it easy and safe for people to ask for help and support causes, for themselves and each other. Together, our community has raised more than $40 billion since 2010. Join us!

The GoFundMe team is searching for our next Staff Data Engineer to lead and own design strategy and implementation across the data ingestion, data attribution, data security, and data reliability and quality workstreams within our Data Platform team. We work with the latest and greatest data platforms and technologies, including Snowflake, Databricks, mParticle, Kafka, and Flink, among others. Your work will directly empower diverse teams in analytics, data science, and data-driven product development, fueling our mission to drive a culture of generosity worldwide. This position requires deep expertise in data ingestion, transformation, and orchestration across our diverse technologies, as well as a passion for data observability, pipeline reliability, and proactive monitoring. Candidates considered for this role will be located in the San Francisco Bay Area; there is an in-office requirement of three days a week.

The Job…
- Lead the design, development, and optimization of data ingestion pipelines, ensuring timely, scalable, and reliable data flows into the enterprise data warehouse (Snowflake).
- Define and implement best practices for data ingestion, transformation, governance, and observability, ensuring consistency, data quality, and compliance across multiple data sources.
- Develop and maintain data ingestion frameworks that support batch, streaming, and event-driven data pipelines.
- Implement and maintain data observability tools to monitor pipeline health, track data lineage, and detect anomalies before they impact downstream users.
- Design and enforce automated data quality checks, validation rules, and anomaly detection to ensure teams can rely on high-integrity data.
- Own and optimize ETL/ELT orchestration (Airflow, Prefect) and ensure efficient, cost-effective data processing.
- Proactively support the health and growth of data infrastructure, ensuring it’s secure and adaptable to future needs.
- Partner with data engineering, software engineering, and platform teams to integrate data from transactional systems, streaming services, and third-party APIs.
- Provide technical mentorship to other engineers on data observability best practices, monitoring strategies, and pipeline reliability.
- Stay curious: research and advocate for new technologies that enhance data accessibility, freshness, and impact.

You have…
- 6+ years of experience in data engineering, with a strong focus on data ingestion, ETL/ELT pipeline design, and large-scale data processing.
- Proven experience in designing and managing data ingestion frameworks for structured and unstructured data.
- Expertise in data observability and monitoring tools (Monte Carlo, Databand, Bigeye, or similar).
- Strong experience with batch and real-time data ingestion (Kafka, Kinesis, Spark Streaming, or equivalent).
- Proficiency in orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data lineage, anomaly detection, and proactive issue resolution in data pipelines.
- Proficiency in SQL and Python for data processing and automation.
- Strong knowledge of API-based data integration and experience working with third-party data sources.
- Hands-on experience with Snowflake and best practices for data warehouse ingestion and management.
- Experience working with data governance, security best practices, and compliance standards.
- Ability to collaborate cross-functionally and communicate technical concepts to non-technical stakeholders.

Nice to have…
- Experience with event tracking, behavioral analytics, and CDP data pipelines (GA, Heap, Segment, RudderStack, etc.).
- Hands-on experience with dbt for data transformation.
- Understanding of data science and machine learning pipelines and how ingestion supports these workflows.

Why you’ll love it here...
- Make an Impact: Be part of a mission-driven organization making a positive difference in millions of lives every year.
- Innovative Environment: Work with a diverse, passionate, and talented team in a fast-paced, forward-thinking atmosphere.
- Collaborative Team: Join a fun and collaborative team that works hard and celebrates success together.
- Competitive Benefits: Enjoy competitive pay and comprehensive healthcare benefits.
- Holistic Support: Enjoy financial assistance for things like hybrid work and family planning, along with generous parental leave, flexible time-off policies, and mental health and wellness resources to support your overall well-being.
- Growth Opportunities: Participate in learning, development, and recognition programs to help you thrive and grow.
- Commitment to DEI: Contribute to diversity, equity, and inclusion through ongoing initiatives and employee resource groups.
- Community Engagement: Make a difference through our volunteering and Gives Back programs.

We live by our core values: impatient to be great, find a way, earn trust every day, fueled by purpose. Be a part of something bigger with us!

GoFundMe is proud to be an equal opportunity employer that actively pursues candidates of diverse backgrounds and experiences. We are committed to providing diversity, equity, and inclusion training to all employees, and we do not discriminate on the basis of race, color, religion, ethnicity, nationality or national origin, sex, sexual orientation, gender, gender identity or expression, pregnancy status, marital status, age, medical condition, mental or physical disability, or military or veteran status.

The total annual salary for this full-time position is $181,000 - $271,000 + equity + benefits. The salary range was determined by role, level, and possible location across the US. Individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range based on your location during the hiring process.

If you require a reasonable accommodation to complete a job application or a job interview, or to otherwise participate in the hiring process, please contact us at accommodationrequests@gofundme.com.

Dedication to Diversity

GoFundMe and Classy are committed to leveraging Diversity, Equity, Inclusion, and Belonging to cultivate a culture that embraces and supports the unique identities, experiences, and perspectives of our people and customers. Our diversity recruiting priority is recognized under our first DEIB driver, Opportunity: Foster Diversity - we identify, recruit, and invest in top talent, ensuring our people reflect the unique identities, experiences, and perspectives of the communities we serve and are all given the chance to grow.

Global Data Privacy Notice for Job Candidates and Applicants: Depending on your location, the General Data Protection Regulation (GDPR) or certain US privacy laws may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required.

Learn more about GoFundMe: We’re proud to partner with GoFundMe.org, an independent public charity, to extend the reach and impact of our generous community while helping drive critical social change. You can learn more about GoFundMe.org’s activities and impact in their FY ‘24 annual report. Our annual “Year in Help” report reflects our community’s impact in advancing our mission of helping people help each other. For recent company news and announcements, visit our Newsroom.
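The automated data-quality checks and validation rules that the GoFundMe role above describes can be sketched in plain Python. This is a minimal illustration only: the record schema (`amount`, `created_at`) and the thresholds are hypothetical, and a production version would run inside an orchestrator such as Airflow and alert through an observability tool rather than return a list.

```python
from datetime import datetime, timedelta, timezone

def check_batch(rows, max_null_rate=0.01, max_lag=timedelta(hours=6)):
    """Run basic data-quality checks on a batch of records.

    Each row is a dict with hypothetical 'amount' and 'created_at' keys.
    Returns a list of human-readable failure messages; an empty list
    means the batch passed all checks.
    """
    if not rows:
        return ["batch is empty"]
    failures = []

    # Null-rate check: too many missing amounts signals an upstream problem.
    nulls = sum(1 for r in rows if r.get("amount") is None)
    null_rate = nulls / len(rows)
    if null_rate > max_null_rate:
        failures.append(f"null rate {null_rate:.2%} exceeds {max_null_rate:.2%}")

    # Range check: amounts, when present, should be positive.
    if any(r.get("amount") is not None and r["amount"] <= 0 for r in rows):
        failures.append("non-positive amount found")

    # Freshness check: the newest record in the batch should be recent.
    newest = max(r["created_at"] for r in rows)
    if datetime.now(timezone.utc) - newest > max_lag:
        failures.append(f"newest record older than {max_lag}")

    return failures
```

In an orchestrated pipeline, a non-empty return value would typically fail the task and block downstream loads, which is the "detect anomalies before they impact downstream users" behavior the listing asks for.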

Posted today

Data Engineer
Possible Finance - Seattle, Washington
We’re on a mission to help our customers and their communities unlock economic mobility for generations to come. Join the team that’s making our goal a reality.

At Possible, we’re building a new type of consumer finance company: one that helps our customers stay out of debt rather than profiting from keeping them in it. As a Public Benefit Corporation, it is our mission and responsibility to help communities unlock economic mobility through affordable credit products crafted to improve financial health. Founded in 2017, our lead VCs are Canvas and Union Square Ventures. We have over 100,000 reviews on the App Store with a 4.8-star average rating. Since our founding, we have redefined how people approach small-dollar loans, delivering over $1 billion in funding to more than 1 million customers, issuing over 4 million loans, and saving our customers more than $500 million.

We are seeking a hardworking Data Engineer to join our ambitious data team. As a Data Engineer at Possible Financial Inc., you will craft, build, and maintain world-class data models and pipelines, and collaborate cross-functionally to drive data-driven decision-making and implement innovative solutions. Are you interested in architecting flawless data infrastructure while making a positive societal impact? Have you always wanted your work to have a mission of helping underserved communities? Join us in Seattle, WA, and work on high-impact, large-scale data engineering projects. Our stack includes AWS, Databricks, dbt, Airflow, and Terraform. We are looking for a self-starter with an outstanding ability to deliver on time and a passion for using data to solve business problems.

Responsibilities
- Design and implement scalable data pipelines and ETL processes
- Build and optimize data models for analytics and machine learning applications
- Collaborate with finance, marketing, and other teams to support their data needs
- Develop and maintain cloud-based data infrastructure (AWS + Databricks)
- Implement data quality measures and monitoring systems
- Evaluate and implement new data technologies and tools

Requirements
- 3+ years of experience in data engineering or a similar role
- Knowledge of SQL, Python, and Spark
- Strong understanding of data modeling and architecture principles
- Proficiency in cloud platforms like AWS, GCP, or Azure and IaC tools such as Terraform
- Experience with modern data warehousing solutions
- Knowledge of at least one orchestration tool (e.g., Airflow, Dagster, Prefect, AWS Step Functions)
- Experience with dbt or similar transformation tools
- Background in implementing data governance and security practices

Preferred Qualifications
- Experience with real-time data processing systems such as Kafka, Kinesis, etc.
- Knowledge of machine learning operations (MLOps)
- Familiarity with monitoring and observability tools like DataDog, Grafana, Great Expectations, and Elementary

With the backing of our venture investors (Union Square Ventures, Canvas Ventures, Euclidean Capital, and Unlock Venture Partners), a dedicated following of hundreds of thousands of customers, and an extraordinary team, we are unwavering in our fight for financial fairness. As one of only a few fintech Public Benefit Corporations, we’ve baked our dual dedication to building a profitable and socially impactful company into our charter; we only succeed when our customers do too. Give us a shout if you’d like to help us ship financial products that protect consumers from predatory lending practices and promote economic health.

This is a hybrid position: we work in the office three days a week, and our office is centrally located in downtown Seattle.

The compensation range for this role is $142,000 to $150,000. We also offer significant stock options, comprehensive benefits, a bonus plan, commuter benefits, and an excellent office space with complimentary drinks and food options. Possible Finance is dedicated to financial fairness and community empowerment. We welcome diverse perspectives and experiences to help us achieve our mission of unlocking economic mobility for generations to come. Learn more about us as a Public Benefit Company.
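The "scalable data pipelines and ETL processes" responsibility in the Possible Finance listing above boils down to extract-transform-load steps like the one sketched below. The field names (`loan_id`, `principal`, `repaid`) are hypothetical, and a real pipeline would express this as a dbt model or Spark job rather than a standalone function; the sketch only shows the defensive-transform pattern of skipping malformed records instead of failing the whole batch.

```python
def transform_loans(raw_rows):
    """Minimal transform step: drop malformed records and derive a
    repayment ratio per loan. Field names are hypothetical.

    raw_rows: iterable of dicts as extracted from a source system,
    where numeric fields may arrive as strings.
    """
    out = []
    for row in raw_rows:
        try:
            principal = float(row["principal"])
            repaid = float(row["repaid"])
        except (KeyError, TypeError, ValueError):
            continue  # malformed record: skip it rather than fail the batch
        if principal <= 0:
            continue  # a zero or negative principal cannot yield a ratio
        out.append({
            "loan_id": row.get("loan_id"),
            "principal": principal,
            "repaid": repaid,
            "repayment_ratio": round(repaid / principal, 4),
        })
    return out
```

Records dropped here would, in practice, be routed to a quarantine table and surfaced by the monitoring systems the listing also mentions, so that data quality issues are visible rather than silently discarded.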

Posted today

Data Scientist, Business
OpenAI - San Francisco, California
About the Team

The Business Data Science team uses data and analytics to optimize business performance, drive growth, and foster meaningful partnerships, with the goal of ensuring the sustained and impactful expansion of OpenAI’s initiatives to maximize the benefits of AGI for all of humanity. We partner with Sales (GTM), Marketing, Partnerships, Support, Finance, Product, and Growth.

About the Role

As a member of our Business Data Science team, you will help build a data-driven culture around insight generation, decision making, and strategy at OpenAI. This role is focused on driving customer success within our business products (ChatGPT Team, ChatGPT Enterprise, and the API). You will work on projects such as identifying opportunities for interventions within a customer lifecycle to drive activation and onboarding, identifying target audiences for new feature launches, and measuring the efficacy of emails, events, and other interventions to drive ongoing engagement with our products. This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Embed with our Customer Success organization as a trusted partner, uncovering new ways to drive customer adoption and engagement of our business products.
- Establish key metrics, run experiments, and perform analysis to help us understand the incrementality of our efforts to drive adoption and engagement.
- Proactively surface insights and opportunities to drive engagement and growth.
- Build tools and systems for stakeholders to self-serve routine data and insights, freeing up time to work on more leveraged analyses.
- Become an expert in OpenAI’s data and systems. Through partnership with Data Engineering, Finance, and other business teams, you will self-serve all the underlying data for our business and derive insights from it.
- Partner with other data scientists across the company to share knowledge and continually synthesize learnings across the organization.

You might thrive in this role if you have:
- 7+ years of experience in data science roles within dynamic, outcome-driven organizations.
- Expertise in statistics and causal inference, applied in both experimentation and observational causal inference studies.
- Proficiency in quantitative programming languages, such as Python and R.
- Expertise in SQL, with extensive experience extracting large datasets and designing ETL workflows.
- Experience using business intelligence tools, such as Mode, Tableau, and Looker.
- A strategic and impact-driven mindset, capable of translating complex business problems into actionable frameworks.
- Ability to build relationships with diverse stakeholders and cultivate strong partnerships.
- Strong communication skills, including the ability to bridge technical and non-technical stakeholders and collaborate across various functions to ensure business impact.
- Ability to craft clear data stories using decks, memos, and dashboards to drive decision-making at every level.
- Best-in-class attention to detail and an unwavering commitment to accuracy.
- A proven track record of solving problems within Finance, Marketing, Partnerships, Sales, Support, or other GTM areas.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement. Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities; requests can be made via this link. See also the OpenAI Global Applicant Privacy Policy.

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
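The "incrementality" measurement that the OpenAI role above centers on usually reduces to comparing a treated group against a control group. A minimal two-proportion sketch in Python, with hypothetical conversion counts for an onboarding-email experiment (real analyses would also account for multiple testing, variance reduction, and so on):

```python
import math

def lift_with_ci(conv_t, n_t, conv_c, n_c, z=1.96):
    """Absolute lift of treatment over control, with a two-proportion
    normal-approximation confidence interval (z=1.96 gives ~95%)."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift = p_t - p_c
    # Standard error of the difference of two independent proportions.
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return lift, (lift - z * se, lift + z * se)

# Hypothetical experiment: 620/5000 treated users activated vs 500/5000 control.
lift, (lo, hi) = lift_with_ci(conv_t=620, n_t=5000, conv_c=500, n_c=5000)
print(f"lift: {lift:.2%}, 95% CI: [{lo:.2%}, {hi:.2%}]")
```

An interval that excludes zero, as in the example counts above, is the usual evidence that the intervention drove incremental activation rather than reaching users who would have activated anyway.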

Posted today

RippleMatch Opportunities - Chicago, IL
This role is with RippleMatch’s partner companies. RippleMatch partners with hundreds of companies looking to hire top talent.

About RippleMatch

RippleMatch is your AI-powered job matchmaker. Our platform brings opportunities directly to you by matching you with top employers and jobs you are qualified for. Tell us about your strengths and goals - we’ll get you interviews! Leading employers leverage RippleMatch to build high-performing teams, and Gen Z job seekers across the country trust RippleMatch to launch and grow their careers.

Requirements for the role:
- Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Economics, or a related field.
- Prior work experience or internships involving data analysis or related fields is a plus.
- Proficiency in statistical analysis and the use of various data analysis tools and software.
- Strong skills in programming languages relevant to data analysis, such as Python, R, SQL, or similar.
- Experience with data visualization tools and software (e.g., Tableau, Power BI).
- Ability to clean, manipulate, and analyze large datasets to derive actionable insights.
- Excellent problem-solving skills and attention to detail.
- Strong organizational and project management abilities, capable of managing multiple tasks simultaneously.
- Effective communication and interpersonal skills, with the ability to present complex data in a clear and concise manner to non-technical audiences.
- A proactive approach to learning and applying new analytics techniques and tools.

Posted 1 week ago

RippleMatch Opportunities - Boston, MA
This role is with RippleMatch’s partner companies. RippleMatch partners with hundreds of companies looking to hire top talent.

About RippleMatch

RippleMatch is your AI-powered job matchmaker. Our platform brings opportunities directly to you by matching you with top employers and jobs you are qualified for. Tell us about your strengths and goals - we’ll get you interviews! Leading employers leverage RippleMatch to build high-performing teams, and Gen Z job seekers across the country trust RippleMatch to launch and grow their careers.

Requirements for the role:
- Currently pursuing a Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Economics, or a related field.
- Strong foundational knowledge in statistical analysis, data modeling, and data mining techniques.
- Proficiency in data analysis tools and programming languages such as Python, R, SQL, or similar.
- Experience with data visualization tools and software (e.g., Tableau, Power BI, or similar).
- Ability to interpret complex data sets and provide actionable insights.
- Excellent problem-solving skills and attention to detail.
- Effective organizational and time management skills, with the ability to prioritize tasks and manage multiple projects simultaneously.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with team members.
- Eagerness to learn and apply new techniques and tools in the field of data analysis.

Posted 2 weeks ago

RippleMatch Opportunities - Atlanta, GA
This role is with RippleMatch’s partner companies. RippleMatch partners with hundreds of companies looking to hire top talent.

About RippleMatch

RippleMatch is your AI-powered job matchmaker. Our platform brings opportunities directly to you by matching you with top employers and jobs you are qualified for. Tell us about your strengths and goals - we’ll get you interviews! Leading employers leverage RippleMatch to build high-performing teams, and Gen Z job seekers across the country trust RippleMatch to launch and grow their careers.

Requirements for the role:
- Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Economics, or a related field.
- Prior work experience or internships involving data analysis or related fields is a plus.
- Proficiency in statistical analysis and the use of various data analysis tools and software.
- Strong skills in programming languages relevant to data analysis, such as Python, R, SQL, or similar.
- Experience with data visualization tools and software (e.g., Tableau, Power BI).
- Ability to clean, manipulate, and analyze large datasets to derive actionable insights.
- Excellent problem-solving skills and attention to detail.
- Strong organizational and project management abilities, capable of managing multiple tasks simultaneously.
- Effective communication and interpersonal skills, with the ability to present complex data in a clear and concise manner to non-technical audiences.
- A proactive approach to learning and applying new analytics techniques and tools.

Posted 1 week ago

RippleMatch Opportunities - Miami, FL
This role is with RippleMatch’s partner companies. RippleMatch partners with hundreds of companies looking to hire top talent.

About RippleMatch

RippleMatch is your AI-powered job matchmaker. Our platform brings opportunities directly to you by matching you with top employers and jobs you are qualified for. Tell us about your strengths and goals - we’ll get you interviews! Leading employers leverage RippleMatch to build high-performing teams, and Gen Z job seekers across the country trust RippleMatch to launch and grow their careers.

Requirements for the role:
- Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Economics, or a related field.
- Prior work experience or internships involving data analysis or related fields is a plus.
- Proficiency in statistical analysis and the use of various data analysis tools and software.
- Strong skills in programming languages relevant to data analysis, such as Python, R, SQL, or similar.
- Experience with data visualization tools and software (e.g., Tableau, Power BI).
- Ability to clean, manipulate, and analyze large datasets to derive actionable insights.
- Excellent problem-solving skills and attention to detail.
- Strong organizational and project management abilities, capable of managing multiple tasks simultaneously.
- Effective communication and interpersonal skills, with the ability to present complex data in a clear and concise manner to non-technical audiences.
- A proactive approach to learning and applying new analytics techniques and tools.

Posted 1 week ago

Data Engineer
The Allen Institute for AI - Seattle, WA
Persons in these roles are welcome to work remotely from any state in the US. Our base salary range is $146,880 - $220,320, and in addition we have generous bonus plans to provide a competitive compensation package. Who You Are: The Allen Institute for AI (Ai2) is hiring a Data Engineer to help integrate a large U.S. patent corpus into the Semantic Scholar platform. This NSF-funded role focuses on high-impact data engineering: linking patent and academic research data, resolving citations, disambiguating inventors and authors, applying topic models, and extending data products and APIs. You’ll work in a high-performing engineering environment and own full-stack data tasks including building pipelines, integrating or training practical ML models, and deploying production services. This is not a research role, but you should be confident implementing ML-driven solutions when off-the-shelf tools don’t cut it. This is a fixed term position scheduled for 2 years with the possibility of renewal. Who We Are:  The Semantic Scholar team builds open, production-grade systems that power scientific discovery and large-scale AI research. We focus on creating high-quality structured datasets, integrating diverse content types, and enabling downstream applications across search, citation analysis, and model training. The team combines strong engineering practices with close collaboration across Ai2’s product and research orgs to deliver tools and infrastructure used by millions of researchers and developers worldwide. 
Your Next Challenge: Build scalable data pipelines (Airflow) for citation resolution and corpus integration Develop and deploy lightweight ML models for inventor disambiguation and author linking Train or adapt a topic model to classify patents using titles, abstracts, claims, and specs Extend REST APIs to expose linked metadata and topic classifications Contribute to dashboards and tools for evaluating data quality and model precision Collaborate with Ai2 engineers to ensure maintainability, test coverage, and robust deployment Produce reliable, well-documented code and contribute technical designs that support long-term maintainability What You’ll Need: Required : Bachelor's degree and 8+ years of technical experience; relevant experience may substitute for education. Strong Python engineering skills, especially for building and maintaining data pipelines Experience with SQL and schema design in production settings (PostgreSQL preferred) Familiarity with common ML workflows (training classifiers, tuning models, and deploying for inference), particularly for large-scale or ambiguous structured datasets Comfortable working with structured datasets (XML/JSON/Parquet) and writing ETL code Experience with workflow orchestration tools (Airflow or similar) and cloud infrastructure (e.g. AWS, S3, Docker) Strong communicator and a strong sense of ownership for results Preferred : Experience with author disambiguation, entity resolution, or record linkage problems Experience applying vector-based similarity or topic modeling techniques to real-world corpora at scale Exposure to citation networks or scholarly data systems (e.g., arXiv, OpenAlex, USPTO) Comfort building internal APIs and dashboards to support ML and data quality review Physical Demands and Work Environment: The physical demands described here are representative of those that must be met by a team member to successfully perform the essential functions of this position. 
Reasonable accommodations may be made to enable individuals with disabilities to perform the functions.

- Must be able to remain in a stationary position for long periods of time
- The ability to communicate information and ideas so others will understand; must be able to exchange accurate information in these situations
- The ability to observe details at close range
- Can work under deadlines

A Little More About Ai2:

Ai2 is a Seattle-based non-profit AI research institute founded in 2014 by the late Paul Allen. Our mission is building breakthrough AI to solve the world’s biggest problems. We develop foundational AI research and innovation to deliver real-world impact through large-scale open models, data, robotics, conservation, and beyond. In addition to Ai2’s core mission, we also aim to contribute to humanity through our treatment of each member of the Ai2 team. Some highlights are:

- We are a learning organization - because everything Ai2 does is ground-breaking, we are learning every day. Through weekly Ai2 Academy lectures, a wide variety of world-class AI experts as guest speakers, and our commitment to your personal ongoing education, Ai2 is a place where you will have opportunities to continue learning alongside your coworkers.
- We value diversity - we seek to hire, support, and promote people of all genders, ethnicities, and levels of experience, regardless of age. We particularly encourage applications from women, non-binary individuals, people of color, members of the LGBTQA+ community, and people with disabilities of any kind.
- We value inclusion - we understand the value that people's individual experiences and perspectives can bring to an organization, and we are building a culture in which all voices are heard, respected, and considered.
- We emphasize a healthy work/life balance - we believe our team members are happiest and most productive when their work/life balance is optimized. While we value powerful research results which drive our mission forward, we also value dinner with family, weekend time, and vacation time. We offer generous paid vacation and sick leave as well as family leave.
- We are collaborative and transparent - we consider ourselves a team, all moving with a common purpose. We are quick to cheer our successes, and even quicker to share and jointly problem-solve our failures.
- We are in Seattle - and our office is on the water! We have mountains, we have lakes, we have four seasons, we bike to work, we have a vibrant theater scene, and we have so much else. We even have kayaks for you to paddle right outside our front door. We welcome interest from applicants from outside of the United States.
- We are friendly - chances are you will like every one of the 200+ (and growing) people who work here. We do.

Ai2 is proud to be an Equal Opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. You may view the related Know Your Rights compliance poster and the Pay Transparency Nondiscrimination Provision by clicking on their corresponding links.

This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S. If E-Verify cannot confirm that you are authorized to work, this employer is required to give you written instructions and an opportunity to contact the Department of Homeland Security (DHS) or Social Security Administration (SSA) so you can begin to resolve the issue before the employer can take any action against you, including terminating your employment.
Employers can only use E-Verify once you have accepted a job offer and completed the Form I-9. We are committed to providing reasonable accommodations to employees and applicants with disabilities to the full extent required by the Americans with Disabilities Act (ADA). If you feel you need a reasonable accommodation pursuant to the ADA, you are encouraged to contact us at recruiting@allenai.org.

Benefits:

Team members and their families are covered by medical, dental, vision, and an employee assistance program. Team members are able to enroll in our health savings account plan, our healthcare reimbursement arrangement plan, and our health care and dependent care flexible spending account plans. Team members are able to enroll in our company’s 401k plan. Team members will receive $125 per month to assist with commuting or internet expenses and will also receive $200 per month for fitness and wellbeing expenses. Team members will also receive up to ten sick days per year, up to seven personal days per year, up to 20 vacation days per year, and twelve paid holidays throughout the calendar year. Team members will be able to receive annual bonuses.

Note: This job description in no way states or implies that these are the only duties to be performed by the team member(s) in this position. Team members will be required to follow any other job-related instructions and to perform any other job-related duties requested by any person authorized to give instructions or assignments. All duties and responsibilities are essential functions and requirements and are subject to possible modification to reasonably accommodate individuals with disabilities. To perform this job successfully, the team member(s) must possess the skills, aptitudes, and abilities to perform each duty proficiently. Some requirements may exclude individuals who pose a direct threat or significant risk to the health or safety of themselves or others.
The requirements listed in this document are the minimum levels of knowledge, skills, or abilities. This document does not create an employment contract, implied or otherwise, other than an at-will relationship.

Posted today

Foursquare - New York, NY
About Foursquare

Foursquare is the leading independent location technology and data cloud platform dedicated to building meaningful bridges between digital spaces and physical places. Our proprietary technology unlocks the most accurate, trustworthy location data in the world, empowering businesses to answer key questions, uncover hidden insights, improve customer experiences, and achieve better business outcomes. A pioneer of the geo-location space, Foursquare’s location tech stack is being utilized by the world’s largest enterprises and most recognizable brands.

About the Team:

As a data engineer on the Places team, you will contribute to the platform services and pipelines that facilitate large-scale data ingestion and governance. You will ship software with high visibility and of strategic importance to Foursquare, directly impacting revenue and the experience of our customers and Open Source community. You will focus on implementing and productionizing our ML models, working closely with our Data Science team to improve model performance and scalability. The Places team owns all components of our places dataset, from ingestion and data expansion to delivery mechanisms like our APIs. We own and iterate on the core building blocks of our customer and Open Source Places product offering, which lays the foundation for Foursquare’s other products and services.
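One governance step in a places dataset like the one described above is deduplicating near-identical venue records. A minimal, hedged sketch (the record fields and rounding precision are illustrative assumptions; production pipelines per this posting would run this distributed via Spark with learned matching):

```python
# Toy places dedup: key each venue by normalized name plus coarsely
# rounded coordinates, keeping the first record seen per key.
def dedupe_places(places, precision=3):
    """Keep the first record seen for each (name, rounded lat/lng) key."""
    seen = {}
    for p in places:
        key = (
            " ".join(p["name"].lower().split()),  # normalize case/whitespace
            round(p["lat"], precision),           # ~100m grid at 3 decimals
            round(p["lng"], precision),
        )
        seen.setdefault(key, p)
    return list(seen.values())

places = [
    {"name": "Joe's Pizza",  "lat": 40.73061, "lng": -73.99561},
    {"name": "Joe's  pizza", "lat": 40.73059, "lng": -73.99558},  # near-duplicate
    {"name": "Joe's Pizza",  "lat": 40.64131, "lng": -74.07764},  # different branch
]
print(len(dedupe_places(places)))  # 2 distinct places remain
```

Rounding coordinates is a crude stand-in for real geospatial blocking, but it shows the shape of the transform: normalize, key, collapse.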
In this role you’ll:

- Influence key decisions on the architecture and implementation of scalable, automated data processing workflows
- Build big data processing pipelines using Spark and Airflow
- Focus on performance, throughput, and latency, and drive these throughout our architecture
- Write test automation, conduct code reviews, and take end-to-end ownership of deployments to production
- Write, deploy, and monitor services for data access by systems across our infrastructure
- Participate in on-call rotation duties
- Act as a force multiplier, conducting code reviews and coordinating cross-team efforts
- Implement and advocate for best practices in testing, code quality, and CI/CD pipelines

What you’ll need:

- BS/BA in a technical field such as computer science, or equivalent experience
- 3+ years of experience in software development, working with production-level code
- Proficiency in one or more of the programming languages we use: Python, Java, or Scala
- Excellent communication skills, including the ability to identify and communicate data-driven insights
- A self-driven attitude and comfort learning without much hand-holding
- Eagerness to learn new technologies
- Your own unique talents! If you don’t meet 100% of the qualifications outlined above, we encourage and welcome you to still apply!

Nice to have:

- Experience with relational or document-oriented database systems, such as Postgres and MongoDB, and experience writing SQL queries
- Experience with cloud infrastructure services, such as AWS (S3, EMR, EC2, Glue, Athena, SQS, SNS) or GCP
- Experience with data processing technologies and tools, such as Spark, Hadoop (HDFS, Hive, MapReduce), Athena, Airflow, and Luigi

Our Tech Stack:

- Languages: Java, Scala, Python
- Pipeline orchestration: Airflow, Luigi
- Data processing frameworks: Spark, MapReduce, Scalding

At Foursquare, we are committed to providing competitive pay and benefits that are in line with industry and market standards.
Actual compensation packages are based on a wide array of factors unique to each candidate, including but not limited to skill set, years and depth of experience, and specific office location. The annual total cash compensation range is _____________; however, actual salaries can vary based on a candidate’s qualifications, skills, and competencies, as well as location. Salary is just one component of Foursquare’s total compensation package, which includes restricted stock units, multiple health insurance options, and a wide range of benefits!

Benefits and Perks:

- Flexible PTO - rest and recharge when you need it!
- Industry-leading healthcare - comprehensive and competitive health, vision, dental, and life insurance
- Savings and investments - 401(k) with company match
- Equipment setup - you will receive all necessary hardware for your job function
- Family planning and fertility programs - via Carrot
- Hybrid work schedule - in-person collaboration on Tuesdays, Wednesdays, and Thursdays

Things to know…

Foursquare is proud to foster an inclusive environment that is free from discrimination. We strongly believe that in order to build the best products, we need a diversity of perspectives and backgrounds. This leads to a more delightful experience for our users and team members. We value listening to every voice, and we encourage everyone to come be a part of building a company and products we love.

Foursquare is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected Veteran status, or any other characteristic protected by law.

Foursquare Privacy Policy

#LI-HYBRID #LI-MM1

Posted 3 weeks ago

Senior Software Engineer, Discovery - Data
Harmonic - New York, NY
About us

Harmonic is the startup discovery engine. It pains us to see great startup opportunities consistently go undiscovered. So we dedicated ourselves to mapping out the startup landscape and building the tools that ensure the most promising founders get found and funded. The world's largest and most prolific venture capital firms (as well as the up-and-comers you haven’t heard of yet) rely on us to find and invest in the next Google, Airbnb, Uber, Stripe, or Anduril. We play a crucial part in ensuring hundreds of billions of dollars get routed efficiently and that the innovations the world would most benefit from materialize. We are on pace to double over the next twelve months and already power thousands of investors' workflows. Backed by $30M from investors like Craft, Floodgate, and Sozo Ventures, we want to power the entire investment workflow from discovery to term sheet. If you resonate with our values and want to fundamentally evolve how venture capital markets work, come join us.

About Discovery

The Discovery team sits at the core of Harmonic’s product and data engine. We ingest data from hundreds of fragmented and unstructured sources - ranging from APIs and raw HTML to legal filings and documents - and turn them into structured insights on startups, people, and investors. We’re responsible not just for the quality, coverage, and freshness of this data, but also for how it’s served and surfaced to users across Harmonic. That means building the pipelines that reconcile data at massive scale and powering the full-stack search experience that helps investors discover breakout companies before anyone else. From advanced LLM-based extraction to scalable search indexing and responsive APIs, our work directly fuels Harmonic’s most critical product surfaces - from grid-based search to AI research copilots.
To learn more about the team:

- Explore Working with Sang and Working with Miguel
- Check out your teammates Jimmy, Akshaya, Apoorva, Gavin, TJ, and Joe
- Explore the Team Page

We’re a mix of ex-founders and seasoned engineers from top engineering institutions like Google, LinkedIn, Microsoft, and Meta.

The role

In this role, you will:

- Architect and implement large-scale Python-based data pipelines that power Harmonic’s structured understanding of startups, people, and investments
- Continue evolving our data extraction and reconciliation processes with dynamic agents
- Own data quality benchmarks across coverage, freshness, and accuracy, and identify new data sources and ingestion techniques that optimize for performance and quality
- Work across the backend stack (databases, search indices, and GraphQL APIs) to bring data to life in the product
- Partner closely with AI/ML, Data Ops, and Product to hit quality and performance OKRs and ship new data products that directly drive revenue and user value

Background we’re looking for:

- 6+ years of experience as a software engineer, with flexibility for exceptional candidates
- Proven track record of building and maintaining large-scale data infrastructure, ideally in high-growth or startup environments
- Strong product sense: the ability to reason about what data is valuable to extract, how it should be structured and served, and how end users will interact with it
- Systems-level thinking: you’ve designed data models, chosen indexing strategies, and built serving paths optimized for both performance and user experience

Experience we’d be particularly excited about:

- Background as a data engineer or full-stack engineer who’s spent time in the weeds wrangling raw, messy, real-world data into structured, useful, and reliable outputs
- Experience with LLM-powered data extraction, search systems, or agentic workflows
- Exposure to venture capital, private markets, or company intelligence data
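A common pattern in LLM-powered extraction pipelines like those described above is validating the model's raw JSON output into a typed record before it enters the pipeline. A hedged sketch follows; the field names, schema, and mock payload are illustrative assumptions, not Harmonic's actual data model:

```python
# Validate one (mocked) LLM extraction response into a typed record,
# rejecting payloads that are missing required fields.
import json
from dataclasses import dataclass

@dataclass
class CompanyRecord:
    name: str
    founded_year: int
    investors: list

def parse_extraction(raw: str) -> CompanyRecord:
    """Parse and validate a single extraction; coerce loosely-typed fields."""
    data = json.loads(raw)
    missing = {"name", "founded_year", "investors"} - data.keys()
    if missing:
        raise ValueError(f"extraction missing fields: {sorted(missing)}")
    return CompanyRecord(
        name=data["name"],
        founded_year=int(data["founded_year"]),  # models often emit numbers as strings
        investors=list(data["investors"]),
    )

mock_llm_output = '{"name": "Acme AI", "founded_year": "2021", "investors": ["Craft"]}'
record = parse_extraction(mock_llm_output)
print(record.name, record.founded_year)
```

Failing fast on malformed extractions keeps bad records out of downstream search indices, which is where data quality benchmarks like the ones this role owns get enforced.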
Pay

$180k-$220k salary + equity, depending on level

Our stack

The Process

Here’s our interview process:

1. Recruiter screening: 20-30 mins
2. Technical screening (system design): 45 mins
3. Technical screening (coding): 2 hrs
4. Behavioral interview with Head of Eng: 45 mins
5. Chat with our CEO: 30 mins
6. References

Benefits

🩺 Top-of-the-line health, dental, and vision insurance, with 100% of premiums covered
📈 401k matching
🍜 Free lunch in office
🍣 Monthly team dinner for each office (we have a lot of foodies)
🚂 Commuter benefits

Posted 1 week ago

Senior Staff Data Scientist, Credit Card
SoFi - San Francisco, CA
Employee Applicant Privacy Notice

Who we are:

Shape a brighter financial future with us. Together with our members, we’re changing the way people think about and interact with personal finance. We’re a next-generation financial services company and national bank using innovative, mobile-first technology to help our millions of members reach their goals. The industry is going through an unprecedented transformation, and we’re at the forefront. We’re proud to come to work every day knowing that what we do has a direct impact on people’s lives, with our core values guiding us every step of the way. Join us to invest in yourself, your career, and the financial world.

The Role

We are seeking a Senior Staff Data Scientist who will help shape our Credit Card business strategy and product development decisions through data-driven insights. The Senior Staff Data Scientist will work closely with cross-functional partners, including Credit Card business leaders and Product & Finance partners, to build and grow SoFi Credit Card into a top choice for consumers to “Get Your Money Right”. This is an exciting role for someone to make a direct impact on new product strategy and revenue at SoFi.

Success in this role hinges on your technical aptitude, quantitative abilities, and business acumen: you know how to plow through data with SQL/Python/Tableau, surface insights using math/statistics/ML techniques, and measure business impact using efficiency/conversion/profit metrics. You treat stakeholder work as a partnership - you are there at each step of the way, and you know that we only succeed if we succeed together.

What you’ll do:

- Work with amazing product and business managers to identify strategic opportunities, measure KPIs to craft compelling stories, make data-driven recommendations, and drive informed actions
- Analyze segment performance at the portfolio level and by vintage to drive insights and optimization
- Help drive customer engagement strategies, including early-stage engagement, balance transfers, spend programs, and retention
- Help manage and analyze the existing cardholder portfolio, covering spend engagement, balance stimulation, and good payment behaviors
- Craft, analyze, and present customer behavior metrics, such as funnel conversion, user churn, feature engagement, product LTV, and cross-sell metrics; work with product and marketing partners to design tests to improve these metrics
- Own the end-to-end product analytics workflow, including formulating success metrics, socializing them across the organization, and creating pipelines and dashboards/reports
- Run significance tests and continuously discover trends in customer behavior to inform decision making with a high level of confidence
- Present analyses to Credit Card and Finance business leaders

What you’ll need:

- M.B.A., B.S., or M.S. in Statistics, Economics, Engineering, Computer Science, Mathematics, or a related quantitative field
- 8+ years of experience in an analytics, business strategy, or related role in the credit card industry
- Strong programming skills in SQL and Python, and proficiency in Tableau
- Experience in the Credit Card industry and a deep understanding of credit card credit bureau data
- Strong understanding of Credit Card P&L (profit and loss) drivers
- Experience with product analytics, experimentation, and hypothesis testing
- Proven track record of end-to-end experience in building analytical frameworks to inform business strategy
- Ability to work in a dynamic, cross-functional environment, with strong attention to detail
- Effective communication and presentation skills and the ability to explain complex analyses in simple terms to business leaders
- Exceptional problem-solving skills
- Strong relationship-building and collaborative skills

Compensation and Benefits

The base pay range for this role is listed below. Final base pay offer will be determined based on individual factors such as the candidate’s experience, skills, and location. To view all of our comprehensive and competitive benefits, visit our Benefits at SoFi page!

SoFi provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion (including religious dress and grooming practices), sex (including pregnancy, childbirth and related medical conditions, breastfeeding, and conditions related to breastfeeding), gender, gender identity, gender expression, national origin, ancestry, age (40 or over), physical or medical disability, medical condition, marital status, registered domestic partner status, sexual orientation, genetic information, military and/or veteran status, or any other basis prohibited by applicable state or federal law. The Company hires the best qualified candidate for the job, without regard to protected characteristics. Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

New York applicants: Notice of Employee Rights

SoFi is committed to embracing diversity. As part of this commitment, SoFi offers reasonable accommodations to candidates with physical or mental disabilities. If you need accommodations to participate in the job application or interview process, please let your recruiter know or email accommodations@sofi.com. Due to insurance coverage issues, we are unable to accommodate remote work from Hawaii or Alaska at this time.

Internal Employees: If you are a current employee, do not apply here - please navigate to our Internal Job Board in Greenhouse to apply to our open roles.
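The significance testing mentioned in this role (e.g., on funnel conversion) often reduces to a two-proportion z-test. A minimal self-contained sketch, with illustrative sample sizes:

```python
# Two-proportion z-test for a difference in conversion rates, using a
# normal-CDF approximation built from math.erf (no SciPy dependency).
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for conversions conv/n in two groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative experiment: control converts 200/5000, treatment 260/5000.
z, p = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2), round(p, 4))
```

At these (made-up) numbers the lift is significant at the usual 5% level; in practice you would also pre-register the metric and check sample-size assumptions before reading the p-value.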

Posted 2 weeks ago

Staff Data Engineer
SoFi - San Francisco HQ, CA
Employee Applicant Privacy Notice

Who we are:

Shape a brighter financial future with us. Together with our members, we’re changing the way people think about and interact with personal finance. We’re a next-generation financial services company and national bank using innovative, mobile-first technology to help our millions of members reach their goals. The industry is going through an unprecedented transformation, and we’re at the forefront. We’re proud to come to work every day knowing that what we do has a direct impact on people’s lives, with our core values guiding us every step of the way. Join us to invest in yourself, your career, and the financial world.

SoFi is seeking an experienced and motivated Staff Data Engineer to be the technical leader of our Data Engineering group within the Data Enablement division. The mission of the Data Enablement division is to activate data throughout SoFi, enabling the creation of personalized and delightful experiences for our members. As a technical leader, you will help lead the vision and strategy to build foundational and critical data products that are highly leveraged across SoFi for offline analytical, reporting, and machine learning use cases as well as more critical online use cases. Our goal is to empower all teams at SoFi to make data-driven decisions and effectively measure their results by providing high-quality, high-availability data and democratized data access through self-service tools.

Role:

A talented, enthusiastic, and detail-oriented experienced Data Engineer who knows how to take on big data challenges in an agile way. This includes big data design and analysis, data modeling, and the development, deployment, and operation of big data pipelines. Leads development of some of the most critical data pipelines and datasets and expands self-service data knowledge and capabilities.
Build scalable foundational data models that can be highly leveraged across SoFi for analytics, reporting, and machine learning. This role requires you to live at the intersection of data and engineering. You have a deep understanding of data, analytical techniques, and how to connect insights to the business, and you have practical experience insisting on the highest standards of operations in ETL and big data pipelines.

What you’ll do:

- Provide technical leadership and strategic guidance to the data engineering team
- Design and develop robust data architectures and data pipelines to support data ingestion, processing, storage, and retrieval
- Evaluate and select appropriate technologies, frameworks, and tools to build scalable and reliable data infrastructure
- Optimize data engineering systems and processes to handle large-scale data sets efficiently
- Design solutions that can scale horizontally and vertically
- Collaborate with cross-functional teams, such as data scientists, software engineers, and business stakeholders, to understand data requirements, influence best practices upstream and downstream, and deliver solutions that meet business needs
- Effectively communicate complex technical concepts and trade-offs to non-technical stakeholders and senior management, verbally and in well-written technical documents
- Enforce data governance policies and practices to maintain data integrity, security, and compliance with relevant regulations
- Collaborate with data governance and security teams to implement robust data protection mechanisms and access controls
- Provide mentorship and guidance to the data engineering team, fostering a culture of continuous learning, innovation, and excellence
- Contribute to hiring and training efforts to build a skilled and motivated data engineering workforce
- Be part of an on-call support rotation to support the EDW

What You'll Need:

- A bachelor’s degree in Computer Science, Data Science, Engineering, or a related field
- Over 8 years of experience in data engineering and analytics, with a proven track record of successfully building data teams
- Proficiency in our data engineering tech stack: Snowflake, Python, SQL, GitLab, AWS, and Airflow
- Proficiency in relational database platforms and cloud database platforms such as Snowflake, Redshift, or GCP
- Strength in Python and/or another data-centric language
- Thorough knowledge of and passion for data modeling, database design, data architecture principles, and data operations
- Strong analytical and problem-solving abilities, with the capability to simplify complex issues into actionable plans
- Experience in a highly regulated and governed sector, such as the Fintech industry, is advantageous

Compensation and Benefits

The base pay range for this role is listed below. Final base pay offer will be determined based on individual factors such as the candidate’s experience, skills, and location. To view all of our comprehensive and competitive benefits, visit our Benefits at SoFi page!

SoFi provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion (including religious dress and grooming practices), sex (including pregnancy, childbirth and related medical conditions, breastfeeding, and conditions related to breastfeeding), gender, gender identity, gender expression, national origin, ancestry, age (40 or over), physical or medical disability, medical condition, marital status, registered domestic partner status, sexual orientation, genetic information, military and/or veteran status, or any other basis prohibited by applicable state or federal law. The Company hires the best qualified candidate for the job, without regard to protected characteristics.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

New York applicants: Notice of Employee Rights

SoFi is committed to embracing diversity. As part of this commitment, SoFi offers reasonable accommodations to candidates with physical or mental disabilities. If you need accommodations to participate in the job application or interview process, please let your recruiter know or email accommodations@sofi.com. Due to insurance coverage issues, we are unable to accommodate remote work from Hawaii or Alaska at this time.

Internal Employees: If you are a current employee, do not apply here - please navigate to our Internal Job Board in Greenhouse to apply to our open roles.

Posted 1 week ago

Senior Manager, Data Engineering
SoFi - Frisco, TX
Employee Applicant Privacy Notice

Who we are:

Shape a brighter financial future with us. Together with our members, we’re changing the way people think about and interact with personal finance. We’re a next-generation financial services company and national bank using innovative, mobile-first technology to help our millions of members reach their goals. The industry is going through an unprecedented transformation, and we’re at the forefront. We’re proud to come to work every day knowing that what we do has a direct impact on people’s lives, with our core values guiding us every step of the way. Join us to invest in yourself, your career, and the financial world.

Role:

We are currently seeking an experienced and highly motivated Senior Manager of Data Engineering to lead our Financial Services Data group, which covers SoFi’s Credit Card, Checking and Savings, Invest, and other products. The mission of this team is to produce exceptional data models, analytics, and reporting that enable our products to delight our members every day.

In this role, your primary responsibilities will include developing and executing the data engineering strategy, driving data architecture initiatives, and overseeing the design and implementation of scalable data products. Collaboration with cross-functional teams will be essential to ensure data quality, integrity, and availability while promoting best practices for data management. This leadership position demands a strong technical background in data engineering, exceptional management skills, and the ability to thrive in a fast-paced, results-oriented environment. As a leader of leaders, you will play a crucial role in guiding and mentoring your team, fostering their professional growth, and ensuring they have the necessary resources and support to effectively lead their respective teams.

What you’ll do

- Lead and manage a team of data engineers, providing mentorship, guidance, and support
- Establish clear goals and expectations for the team, ensuring alignment with the organization's objectives
- Develop and implement the data engineering strategy in line with SoFi's overall data strategy
- Keep abreast of industry trends and emerging technologies in data engineering, assessing their potential for adoption
- Design, construct, and maintain scalable and dependable data infrastructure and products, encompassing data pipelines, ETL processes, and data warehouses
- Safeguard data integrity, quality, and security across all data engineering processes
- Enhance the performance and efficiency of data infrastructure and systems
- Collaborate with data scientists, analysts, and software engineers to understand their data requirements and devise appropriate data solutions
- Collaborate closely with stakeholders from diverse business units to define and implement data engineering solutions that cater to their needs
- Communicate effectively with both technical and non-technical stakeholders, translating complex concepts into clear and actionable insights
- Plan, oversee, and execute data engineering projects, ensuring timely delivery within allocated budgets
- Define project scope, objectives, deliverables, and timelines, while appropriately allocating resources

What you’ll need

- A bachelor's degree in Computer Science, Data Science, Engineering, or a related field; a master's degree is advantageous
- Over 8 years of experience in data engineering and analytics, with a proven track record of successfully building data teams
- At least 2 years of experience managing managers
- Proficiency in data engineering technologies such as SQL, Python, Snowflake, Airflow, data warehousing, and cloud platforms
- Thorough knowledge of data modeling, database design, data architecture principles, and data operations
- Exceptional leadership and team management skills, with a demonstrated ability to inspire and develop high-performing teams
- Strong analytical and problem-solving abilities, with the capability to simplify complex issues into actionable plans
- Excellent communication skills, both written and verbal, enabling the clear presentation of complex information to technical and non-technical audiences
- Experience in the Fintech industry is advantageous

Compensation and Benefits

The base pay range for this role is listed below. Final base pay offer will be determined based on individual factors such as the candidate’s experience, skills, and location. To view all of our comprehensive and competitive benefits, visit our Benefits at SoFi page!

SoFi provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion (including religious dress and grooming practices), sex (including pregnancy, childbirth and related medical conditions, breastfeeding, and conditions related to breastfeeding), gender, gender identity, gender expression, national origin, ancestry, age (40 or over), physical or medical disability, medical condition, marital status, registered domestic partner status, sexual orientation, genetic information, military and/or veteran status, or any other basis prohibited by applicable state or federal law. The Company hires the best qualified candidate for the job, without regard to protected characteristics.

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

New York applicants: Notice of Employee Rights

SoFi is committed to embracing diversity. As part of this commitment, SoFi offers reasonable accommodations to candidates with physical or mental disabilities. If you need accommodations to participate in the job application or interview process, please let your recruiter know or email accommodations@sofi.com. Due to insurance coverage issues, we are unable to accommodate remote work from Hawaii or Alaska at this time.
Internal Employees If you are a current employee, do not apply here - please navigate to our Internal Job Board in Greenhouse to apply to our open roles.
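The posting above centers on data pipelines, ETL processes, and quality safeguards. As an illustrative sketch only (the real stack named in the listing — Snowflake, Airflow — is stood in for here by Python's built-in sqlite3, and the table and field names are invented), a toy extract-transform-load step with a basic data-quality filter might look like:

```python
import sqlite3

def run_etl(rows, db_path=":memory:"):
    """Toy extract-transform-load step.

    `rows` is an iterable of (user_id, amount_cents) tuples standing in for
    raw source records. Records failing basic quality checks (empty id,
    negative amount) are dropped, amounts are normalized from cents to
    dollars, and the clean rows are loaded into a warehouse-style table.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_payments (user_id TEXT, amount_usd REAL)"
    )
    # Transform: enforce simple integrity rules, then convert cents -> dollars.
    clean = [(uid, cents / 100.0) for uid, cents in rows if uid and cents >= 0]
    conn.executemany("INSERT INTO fact_payments VALUES (?, ?)", clean)
    conn.commit()
    return conn
```

In a production pipeline of the kind described, each stage (extract, quality checks, load) would typically be a separate, retryable task in an orchestrator such as Airflow rather than one function.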

Posted 30+ days ago

Product Manager - Data API
The Weather CompanyAndover, Massachusetts


Job Description

About The Weather Company:

The Weather Company is the world’s leading weather provider, helping people and businesses make more informed decisions and take action in the face of weather. Together with advanced technology and AI, The Weather Company’s high-volume weather data, insights, advertising, and media solutions across the open web help people, businesses, and brands around the world prepare for and harness the power of weather in a scalable, privacy-forward way. The world’s most accurate forecaster globally, the company reaches hundreds of enterprise clients and more than 360 million monthly active users via its digital properties from The Weather Channel (weather.com) and Weather Underground (wunderground.com).

Job brief:

The Weather Company is seeking a strategic, data-driven, and customer-obsessed Enterprise API Product Manager to lead and scale our API portfolio. The weather API portfolio includes over a hundred industry-agnostic as well as industry-specific weather APIs that are sold across a growing number of sales channels to customers around the globe.

This role will drive the strategy, roadmap, and execution of our Weather API product, ensuring it meets the needs of developers, businesses, and end-users. The ideal candidate will have a strong background in product management, API development, and building intuitive self-serve platforms, with a passion for leveraging weather data to solve real-world problems. Your success will be the result of your strong business and technical acumen, an experimentation mindset, and a scrappy, self-starting attitude to iterate quickly, deliver value, and unlock growth. 

The impact you'll make:

  • Own the Weather API Product Portfolio: Manage and evolve a diverse set of APIs, including foundational data APIs and vertical-specific solutions, ensuring product-market fit across multiple industries.
  • Product Strategy & Roadmap: Define and execute the vision, strategy, and roadmap for the weather data API and self-serve platform, aligning with business goals and customers' needs.
  • API Development & Management: Collaborate with the API platform team on the design, development, and maintenance of a scalable, reliable, and secure weather data API, ensuring high performance, low latency, and robust documentation.
  • Customer Journey Optimization: Drive the design and execution of the end-to-end product lifecycle—from customer discovery to trial, sale, and renewal—ensuring a frictionless journey and best-in-class developer experience.
  • Self-Serve Platform Ownership: Lead the creation and optimization of a self-serve platform that enables developers and businesses to easily discover, access, and integrate the APIs, including features like API key management, usage analytics, and billing.
  • API Marketplace Strategy: Partner with business leadership to execute strategies for API discovery, distribution, and monetization through marketplaces (e.g., AWS Marketplace, Snowflake, RapidAPI) and direct enterprise channels.
  • Cross-Functional Leadership: Collaborate with engineering, marketing, sales, customer success, and partner teams to align roadmap execution and go-to-market strategy.
  • Experimentation & Innovation: Embrace a test-and-learn approach to rapidly experiment, validate concepts, and refine offerings based on data and user feedback.
  • Market & Trend Analysis: Stay at the forefront of API trends, standards, protocols, and competitive landscape to inform product direction, identify new opportunities, and maintain a competitive edge.
  • Metrics & Impact: Define and track KPIs to measure product adoption, customer engagement, API performance, and revenue impact.
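Several of the responsibilities above revolve around the self-serve developer experience: API keys, usage analytics, and clean integration. As a minimal sketch of the client side of such a product — the endpoint URL, header name, and query parameters here are hypothetical, not The Weather Company's actual API — a forecast request might be assembled like this:

```python
from urllib.parse import urlencode

# Hypothetical base URL; the posting does not specify real endpoints.
BASE_URL = "https://api.example-weather.com/v3/forecast/daily"

def build_forecast_request(api_key, lat, lon, units="m"):
    """Return the (url, headers) pair for a daily-forecast lookup.

    The API key is sent as a request header rather than a query parameter,
    so it does not leak into access logs, proxies, or browser history.
    """
    query = urlencode({
        "geocode": f"{lat},{lon}",   # location as "lat,lon"
        "units": units,              # e.g. "m" for metric
        "format": "json",
    })
    headers = {"X-Api-Key": api_key, "Accept": "application/json"}
    return f"{BASE_URL}?{query}", headers
```

A self-serve platform of the kind described would issue the key at signup and meter each request against it for the usage analytics and billing features mentioned above.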

What you've accomplished:

  • 5+ years of product management experience, with at least 3 years focused on API products or developer platforms.
  • Proven success in bringing APIs to market, ideally across both direct sales and marketplaces.
  • Proven track record of designing and launching self-serve platforms, including features like user onboarding, subscription management, and analytics dashboards.
  • Experience working with weather data, geospatial data, or similar data-intensive APIs is a strong plus.
  • Strong understanding of API design principles (e.g., RESTful APIs), authentication protocols (OAuth, API keys), and developer tooling.
  • Deep appreciation for the developer experience and technical documentation standards.
  • Business-savvy and commercially minded with experience crafting pricing, packaging, and monetization strategies.
  • Analytical and data-driven; a passion for unpacking customer insights to guide decisions.
  • Excellent communication and collaboration skills; ability to rally cross-functional teams toward a common goal.
  • Scrappy, self-starter attitude with a bias for action and ownership.
  • Passionate about solving customer problems and delivering frictionless, scalable solutions.
  • Experience with API-first companies, SaaS platforms, or data providers.
  • Familiarity with cloud marketplaces and third-party developer ecosystems.
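The qualifications above call out authentication protocols such as OAuth and API keys. As one concrete illustration of the former, an OAuth 2.0 client-credentials token request (RFC 6749, section 4.4) — the grant commonly used for server-to-server API access — can be assembled as follows (the token URL and credentials are placeholders):

```python
import base64
from urllib.parse import urlencode

def build_token_request(token_url, client_id, client_secret):
    """Construct an OAuth 2.0 client-credentials token request.

    Returns (url, headers, body); the caller would POST this to the
    authorization server and receive a bearer token to attach to
    subsequent API calls as `Authorization: Bearer <token>`.
    """
    # Client authenticates with HTTP Basic auth: base64("id:secret").
    credentials = base64.b64encode(
        f"{client_id}:{client_secret}".encode()
    ).decode()
    headers = {
        "Authorization": f"Basic {credentials}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({"grant_type": "client_credentials"})
    return token_url, headers, body
```

API-key auth (as in the sketch, a single static header) trades simplicity for coarser control; OAuth adds token expiry and scoping, which matters for the subscription-management features this role owns.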
TWCo Benefits/Perks:
  • Flexible Time Off program
  • Hybrid work model
  • A variety of medical insurance options, including an employee coverage option with a $0 premium
  • Benefits effective day 1 of employment include a competitive 401K match with no vesting requirement, national health, dental, and vision plans
  • Progressive family plan benefits
  • An opportunity to work for a global and industry-leading technology company
  • Impactful work in a collaborative environment
