Job Description
About the Team
SWIB’s Technology & Data Division is a highly valued partner to our investment teams and a critical contributor to SWIB's continued growth and success. With approximately 75 employees, the division comprises (a) technology, including business systems analysts, design & development, infrastructure, and information security and governance teams; (b) data, including data science, investment data, investment information architecture, data engineering, and reporting; (c) business integration, change, and coordination; and (d) project management.
As a top asset owner, our technology and data systems address the entire breadth of investments. Our mission-critical technology and data are essential to deliver internally managed equity, fixed income, and hedge fund strategies, and to oversee external managers investing in both private and publicly traded assets. The team also runs the systems that operate and protect our organization. A few of our current key systems include SimCorp, eFront, Charles River, Snowflake, Markit, and FactSet.
Position Overview
In this role, you will own the conceptual, architectural, and technical requirements for all aspects of the company’s enterprise data warehouse initiatives. The role sits at the intersection of data modeling and data architecture, delivering a next-generation data lake and data warehouse on cloud technology (Snowflake). The Data Engineer is responsible for the design, development, and maintenance of the enterprise data warehouse.
Essential Activities:
- Design, build, and optimize scalable, high-performance data pipelines and cloud-native data warehousing solutions using Snowflake and Azure.
- Utilize advanced SQL capabilities (e.g., CTEs, window functions, set operations, performance tuning) to support complex data transformations and integrations.
- Develop, schedule, and monitor data workflows using Snowflake Streams, Tasks, and Stored Procedures for real-time and batch ETL/ELT processes.
- Implement and maintain CI/CD pipelines using Azure DevOps and GitLab, automating the deployment and testing of data pipeline code and infrastructure.
- Work extensively within the Azure ecosystem, leveraging services like Azure Data Factory, Azure Storage, and Azure Functions to build robust data integration solutions.
- Develop and maintain dimensional and normalized data models (covering 1NF through 3NF) to support flexible and scalable data architecture.
- Create modular, version-controlled data transformation logic that supports automated documentation and testing.
- Ingest and transform data from diverse formats (JSON, CSV, Parquet, Avro) and sources, including APIs and both on-premise and cloud systems.
- Drive data quality and governance by establishing and enforcing standards, validation rules, and best practices for metadata management and data lineage to ensure the accuracy, consistency, and reliability of enterprise data.
- Document technical designs, transformation logic, and architectural decisions clearly and concisely for cross-functional use.
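The advanced-SQL bullet above (CTEs, window functions) can be sketched in a small, self-contained example. This uses SQLite purely so the snippet runs anywhere; the SQL itself (a CTE feeding a windowed running total) is standard and would work the same way in Snowflake. Table and column names are illustrative, not from the posting.

```python
import sqlite3

# In-memory SQLite stands in for Snowflake here; window functions
# require SQLite >= 3.25 (bundled with modern Python builds).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trades (desk TEXT, trade_date TEXT, notional REAL);
    INSERT INTO trades VALUES
        ('equities', '2024-01-02', 100.0),
        ('equities', '2024-01-03', 150.0),
        ('fixed_income', '2024-01-02', 200.0),
        ('fixed_income', '2024-01-03', 50.0);
""")

# CTE aggregates per desk/day; the window function then computes a
# running total per desk, ordered by date.
rows = conn.execute("""
    WITH daily AS (
        SELECT desk, trade_date, SUM(notional) AS total
        FROM trades
        GROUP BY desk, trade_date
    )
    SELECT desk, trade_date,
           SUM(total) OVER (
               PARTITION BY desk ORDER BY trade_date
           ) AS running_total
    FROM daily
    ORDER BY desk, trade_date
""").fetchall()
```

The same CTE-plus-window pattern is a common building block for the complex transformations the role describes, because it keeps each step of the logic readable and independently testable.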
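The ingestion and data-quality bullets above can likewise be sketched with only the standard library: one function parses records from two of the listed formats (JSON and CSV), and a second applies a simple validation rule. Field names and the rule itself are assumptions for illustration, not part of the posting.

```python
import csv
import io
import json

def load_records(payload: str, fmt: str) -> list:
    """Parse JSON or CSV text into a list of row dicts."""
    if fmt == "json":
        return json.loads(payload)
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(payload)))
    raise ValueError(f"unsupported format: {fmt}")

def validate(records, required=("id", "price")):
    """Data-quality rule: required fields must be present and non-empty.
    Returns (accepted_rows, rejected_rows)."""
    bad = [r for r in records if any(not r.get(k) for k in required)]
    return [r for r in records if r not in bad], bad

# Two sources, two formats, one unified validation pass.
json_rows = load_records('[{"id": "1", "price": "9.5"}, {"id": "", "price": "3"}]', "json")
csv_rows = load_records("id,price\n2,4.25\n3,\n", "csv")
good, rejected = validate(json_rows + csv_rows)
```

In practice the validation rules would be centrally defined and versioned, so the same standards apply regardless of source format, which is the governance point the bullet is making.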
The Ideal Candidate:
- Bachelor's degree in Computer Science, Engineering, Finance, or a related field.
- Exposure to the investment or financial services industry.
- 6–8 years of experience in data engineering, with a strong emphasis on SQL development, ETL/ELT design, and cloud-based data platforms.
- 2+ years of hands-on experience with Snowflake, including deep knowledge of Streams, Tasks, Procedures, and Snowflake-specific features like virtual warehouses and time travel.
- A strong commitment to data quality and governance, with a proven ability to establish and enforce standards, define validation rules, and implement best practices for metadata management and data lineage.
- Strong understanding of data modeling techniques, including both dimensional and normalized models.
- Solid experience working with Azure cloud services for data integration, processing, and orchestration.
- Experience with DBT, or similar tool, for transformation logic, modular pipeline development, and automated testing/documentation.
- Proven ability to manage multiple data sources and formats across complex architectures.
- Strong understanding of CI/CD practices in a data engineering context using Azure DevOps and GitLab.
Optional but Preferred:
- Experience with Python for scripting, orchestration, and automation.
- Familiarity with Data Vault 2.0, Master Data Management (MDM), or enterprise data quality frameworks.
- Snowflake certification (e.g., SnowPro Core or Advanced).
Benefits:
- Competitive total cash compensation, based on Aon (formerly McLagan) industry benchmarks
- Comprehensive benefits package
- Educational and training opportunities
- Tuition reimbursement
- Challenging work in a professional environment
- Hybrid work environment