
Auto-apply to these data science jobs

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

Infosys LTD, Albertville, AL
Job Description: Infosys is seeking a hands-on Snowflake Data Architect and Data Vault Modeler to design and implement modern data architectures that enable analytics, AI/ML, and digital transformation. In this role, you will enable digital transformation for our clients in a global delivery model, research technologies independently, recommend appropriate solutions, and contribute to technology-specific best practices and standards. You will interface with key stakeholders and apply your technical proficiency across different stages of the Software Development Life Cycle. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.

Required Qualifications:
- Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
- Candidate must be located within commuting distance of Albertville, AL, or be willing to relocate to the area.
- Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time.
- At least 7 years of experience in Information Technology.
- At least 5 years of experience in data engineering/architecture roles with strong hands-on expertise.
- Experience in data modeling (Data Vault modeling; see the sketch after this listing) and in-depth knowledge of modeling tools like Erwin.
- Cloud data platforms (Snowflake preferred; Azure/GCP a plus).
- Experience with Snowflake, SnowSQL, and Erwin.
- Familiarity with BI tools like Power BI or Tableau.

Preferred Qualifications:
- Experience with Data Vault modeling and enterprise-level DWH concepts.
- Familiarity with ML pipeline integration.
- Certifications: Snowflake Solutions Architect Expert, Data Vault Modeler.

The job entails sitting as well as working at a computer for extended periods of time. The candidate should be able to communicate by telephone, email, or face to face. Travel may be required per the job requirements. Along with competitive pay, as a full-time Infosys employee you are also eligible for the following benefits: Medical/Dental/Vision/Life Insurance; Long-term/Short-term Disability; Health and Dependent Care Reimbursement Accounts; Insurance (Accident, Critical Illness, Hospital Indemnity, Legal); 401(k) plan and contributions dependent on salary level; Paid holidays plus Paid Time Off.
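For orientation, here is a minimal sketch of the core Data Vault mechanics the posting names: a hash key computed over a business key, plus hub and satellite DDL for Snowflake. Table and column names are invented for illustration, not Infosys's actual standards.

```python
import hashlib

def hash_key(*business_keys: str) -> str:
    """Data Vault style hash key: MD5 over the normalized, concatenated business key(s)."""
    joined = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(joined.encode()).hexdigest()

# Hypothetical Snowflake DDL for a customer hub and its descriptive satellite.
HUB_CUSTOMER = """
CREATE TABLE IF NOT EXISTS hub_customer (
    hub_customer_hk  CHAR(32)      NOT NULL PRIMARY KEY,  -- hash of the business key
    customer_id      VARCHAR       NOT NULL,              -- business key
    load_dts         TIMESTAMP_NTZ NOT NULL,
    record_source    VARCHAR       NOT NULL
);
"""

SAT_CUSTOMER_DETAILS = """
CREATE TABLE IF NOT EXISTS sat_customer_details (
    hub_customer_hk  CHAR(32)      NOT NULL REFERENCES hub_customer,
    load_dts         TIMESTAMP_NTZ NOT NULL,
    hash_diff        CHAR(32)      NOT NULL,  -- change-detection hash over the payload
    name             VARCHAR,
    email            VARCHAR,
    PRIMARY KEY (hub_customer_hk, load_dts)   -- history kept by load timestamp
);
"""

print(hash_key("CUST-00042"))  # deterministic, so the same key hashes identically on every load
```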

Posted 30+ days ago

Edelman, Bogota, NJ
We are looking to expand our capabilities with support in global research, analytics, performance, and data consultancy. This role is ideal for someone early in their career who is eager to learn and develop skills in media and social monitoring and analysis. You will gain valuable experience in a dynamic, fast-paced, and highly international environment.

Experience
- 1 to 3 years of experience working in data collection, monitoring, reporting, and visualization in advertising/PR agencies or marketing departments
- Excellent verbal and written communication skills
- Strong numeracy and analytical skills
- Strong knowledge of the Microsoft Suite (Word, PowerPoint, Excel)
- Good organizational skills and ability to manage tasks effectively
- Strong degree of accuracy and attention to detail
- Understanding of North American business culture
- Knowledge of PR/communications and marketing
- Ability to work closely with international stakeholders and adapt to the demands of a global organization
- Ability to work independently and as part of a team
- Enjoys working in a dynamic and fast-paced environment
- English fluency (B2)

Key responsibilities
Data Collection and Management
- Support the collection of data from various media sources (social media, television, radio, print, digital advertising channels, etc.)
- Help ensure data accuracy and consistency within projects
- Learn and assist in creating basic search queries and taxonomies using Boolean language (see the sketch after this listing)

Monitoring & Data Analysis
- Assist in setting up and maintaining alert systems according to project needs (such as taxonomies, frequency, etc.)
- Support the analysis of media performance metrics like visibility, reach, engagement, conversions, or ROI
- Participate in data analysis related to audience behavior, content preferences, and market trends specific to the Americas region

Reporting and Visualization
- Help prepare clear, actionable reports (daily, weekly, monthly, or quarterly) using visual storytelling
- Support the creation and maintenance of live dashboards within media monitoring tools
- Assist in customizing reports to meet regional requirements, including language and cultural considerations

If you checked off everything above, apply now and take on this challenge where you can grow professionally, connect with expert teams, receive training locally, regionally, and globally, and LIVE our TRUST culture (open, inclusive, and innovative).
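As a small, hypothetical illustration of the Boolean query work described above, this sketch assembles a monitoring query from a taxonomy. The client, terms, and structure are invented; real taxonomies are tool-specific.

```python
# Hypothetical taxonomy for a beverage client; all terms are illustrative only.
taxonomy = {
    "brand":   ["AcmeCola", '"Acme Cola"'],
    "topics":  ["launch", "recall", "sponsorship"],
    "exclude": ["job", "hiring"],
}

def boolean_query(tax: dict) -> str:
    """Build a monitoring query: brand AND topic, minus noise terms."""
    brand   = " OR ".join(tax["brand"])
    topics  = " OR ".join(tax["topics"])
    exclude = " OR ".join(tax["exclude"])
    return f"({brand}) AND ({topics}) NOT ({exclude})"

print(boolean_query(taxonomy))
# (AcmeCola OR "Acme Cola") AND (launch OR recall OR sponsorship) NOT (job OR hiring)
```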

Posted 4 weeks ago

AcuityMD, Boston, MA

$175,000 - $200,000 / year

Senior Data Engineer, Data & Intelligence Products

AcuityMD is a software and data platform that accelerates access to medical technologies. We help MedTech companies understand how their products are used, why customers vary, and identify opportunities for physicians to better serve their patients. Each year, the FDA approves ~6,000 new medical devices. Our solution helps MedTech companies get these products to physicians more effectively so they can improve patient care with the latest technology. We're backed by Benchmark, Redpoint, ICONIQ Growth, and Ajax Health, and we're a high-growth SaaS company scaling rapidly.

In this role, you will help lead the evolution of AcuityMD's core healthcare data assets. You will build new data and intelligence products, improve the quality and interpretability of existing data products, and deliver high-confidence, repeatable intelligence that fuels our MedTech-specific modules. You will identify new approaches and new data that improve the quality of our data assets. You will report to the Engineering Manager, Data, and work cross-functionally with engineering, product, and commercial organizations.

Team Mission
The Data Team is on a mission to represent medical reality by transforming raw data into assets that directly generate sales for our customers and help bring cutting-edge medical technology to the patients that need it most. We thrive on the challenge of turning complexity into simplicity and driving the continuous growth of our products. We acquire terabytes of data from diverse sources and refine them with modern data processing tools and machine learning algorithms. This shapes the future of our products and delivers value to our customers.

Responsibilities
- Transform, model, and integrate raw data of varying quality from a wide range of data sources into usable, documented, high-quality data and intelligence products by applying data modeling and statistical techniques
- Identify, research, and develop new statistical approaches and new types of data to improve and extend our core healthcare data products
- Lead feature and product development cycles, from defining customer problem statements through to delivering solutions
- Work directly with product managers and cross-functional stakeholders to influence and build our product development plans and roadmap
- Provide thought leadership on data science techniques and mentoring to junior data engineers
- Document and communicate technical and quantitative concepts, schemas, and data product usage guidelines with appropriate levels of detail for internal and external stakeholders

Your Profile
- You have 6+ years of experience working in a data science or similar role, delivering analytic models as data products or data solutions
- You have professional experience working with time series, geospatial, and/or demographic datasets to make forecasts or predictions (see the sketch after this listing)
- You are excited by extracting signals and information from messy, real-world datasets
- You can translate and explain technical requirements, recommendations, and code clearly and concisely for non-technical audiences
- You are opinionated about statistical techniques or Python libraries

Nice to Haves
- Familiarity across our data stack: BigQuery, Google Cloud Storage, dbt & Dagster
- Familiarity with insurance billing and other healthcare datasets
- Familiarity with incorporating large language models into development workflows

AcuityMD is committed to providing highly competitive cash compensation, equity, and benefits. The compensation offered for this role will be based on multiple factors such as location, the role's scope and complexity, the candidate's experience and expertise, and market data, and may vary from the range provided. Base salary range: $175,000-$200,000. You must have an eligible work permit in the USA to be considered for this position.

We Offer:
- Ground floor opportunity: Join a high-growth startup, backed by world-class investors across enterprise SaaS and medical devices (Benchmark, Redpoint Ventures, and Ajax Health).
- Learning budget: Reimbursements for relevant learning and up-skilling opportunities.
- Remote work: AcuityMD is committed to supporting full-remote flexibility for employees in the US. We provide a work-from-home stipend for all employees.
- Flexible PTO: Generous time off and flexible hours give you the freedom to do your best work.
- Paid health, dental, and vision plans: We offer 100% paid health, dental, and vision plans for all employees and 75% paid for our employees' dependents.
- Home office stipend: $1,000 to invest in remote office equipment and WiFi reimbursement.
- Optional team retreats: We meet in person multiple times per year for co-working and social gatherings.
- Parental leave: 6-12 weeks of fully paid, flexible parental leave.

Who We Are:
The Company: We are builders, inspired by our mission to expand patient access to cutting-edge medical technologies. We value working collaboratively to solve hard problems for our customers with simple, innovative solutions. We push ourselves to learn with empathy. We foster an active culture of mentorship and inclusion, and we welcome new team members who share our values. We're backed by Benchmark, Redpoint Ventures, Ajax Health, and several other leading software and medical device investors. Since Acuity launched in 2020, we've brought on customers ranging from publicly traded Fortune 500 companies to innovative growth-stage companies and regional medical device distributors.
The Product: AcuityMD uses data and software to help teams collaborate around the complex relationships they have with the users of medical technologies: doctors. Our platform empowers medical technology companies to see how their products are used, understand why outcomes vary, and identify opportunities where physicians or sites of care can better serve their patients.

AcuityMD is an Equal Opportunity Employer. AcuityMD seeks to create a diverse work environment because all teams are stronger with different perspectives and life experiences. We strongly encourage people of all backgrounds to apply. We do not discriminate on the basis of race, gender, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status. All employees and contractors of AcuityMD are responsible for maintaining a work culture free from discrimination and harassment by treating others with kindness and respect.
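The profile above asks for time-series forecasting experience. As a minimal, hypothetical sketch of where such work often starts, this computes a seasonal-naive baseline on synthetic monthly volumes; the data and metric choice are invented, not AcuityMD's methods.

```python
import numpy as np
import pandas as pd

# Synthetic monthly procedure volumes for one region: trend-free seasonality plus noise.
idx = pd.date_range("2022-01-01", periods=36, freq="MS")
rng = np.random.default_rng(0)
volumes = pd.Series(
    200 + 10 * np.sin(np.arange(36) * 2 * np.pi / 12) + rng.normal(0, 5, 36),
    index=idx,
)

# Seasonal-naive baseline: predict each month with the value from 12 months earlier.
forecast = volumes.shift(12).dropna()
actual = volumes.loc[forecast.index]
mape = (np.abs(actual - forecast) / actual).mean()
print(f"Seasonal-naive MAPE: {mape:.1%}")  # any real model should beat this baseline
```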

Posted 4 weeks ago

Applied Intuition, Sunnyvale, CA

$180,000 - $225,000 / year

About Applied Intuition
Applied Intuition is the vehicle intelligence company that accelerates the global adoption of safe, AI-driven machines. Founded in 2017 and now valued at $15 billion following its recent Series F funding round, Applied Intuition delivers the Vehicle OS, Self-Driving System, and toolchain to help customers build intelligent vehicles and shorten time to market. 18 of the top 20 global automakers and major programs across the Department of Defense trust Applied Intuition's solutions to deliver vehicle intelligence. Applied Intuition serves the automotive, defense, trucking, construction, mining, and agriculture industries and is headquartered in Mountain View, CA, with offices in Washington, D.C., San Diego, CA, Ft. Walton Beach, FL, Ann Arbor, MI, London, Stuttgart, Munich, Stockholm, Bangalore, Seoul, and Tokyo. Learn more at applied.co.

We are an in-office company, and our expectation is that employees primarily work from their Applied Intuition office 5 days a week. However, we also recognize the importance of flexibility and trust our employees to manage their schedules responsibly. This may include occasional remote work, starting the day with morning meetings from home before heading to the office, or leaving earlier when needed to accommodate family commitments. (Note: For EpiSci job openings, fully remote work will be considered by exception.)

About the role
We are looking for a Staff Data Platform Engineer to shape the strategy, architecture, and execution of our next-generation data ecosystem. In this role, you will partner closely with our autonomy stack teams, the "customers" of our platform, to deeply understand their workflows, pain points, and evolving needs. You will lead the design and development of robust, scalable, data-intensive distributed systems that power the full lifecycle of autonomous driving data: collection, ingestion, curation, machine learning training, and evaluation. You'll be a key decision-maker in defining best practices, data contracts, and integration patterns across upstream and downstream systems. This is a highly impactful role for someone who thrives at the intersection of technical leadership, system design, and cross-team collaboration, and who wants to elevate the capabilities of our data platform to support cutting-edge ML and autonomy development.

At Applied Intuition, you will:
- Drive data platform strategy: Define the long-term vision and technical roadmap for the data ecosystem, balancing scalability, reliability, cost efficiency, and developer experience
- Partner with customers: Engage deeply with autonomy stack teams to gather requirements, uncover pain points, and translate them into platform capabilities
- Lead complex workflow development: Architect and build end-to-end, large-scale ETL and data workflows for data collection, ingestion, transformation, and delivery
- Establish data contracts: Define and enforce clear SLAs and contracts with upstream data producers and downstream data consumers (see the sketch after this listing)
- Set best practices: Champion data engineering best practices around governance, schema evolution, lineage, quality, and observability
- Mentor and influence: Guide other engineers and teams on designing scalable data systems and making strategic technology choices
- Collaborate across functions: Work with infrastructure, ML platform, autonomy stack, and labeling teams to ensure smooth data flow and ecosystem integration

We're looking for someone who has:
- 10+ years of experience in data engineering, distributed systems, or related backend engineering roles
- A proven track record of architecting and building large-scale, data-intensive, distributed systems
- Deep experience with complex ETL pipelines, data ingestion frameworks, and data processing engines (e.g., Spark, Flink, Airflow, Flyte, Kafka)
- A strong understanding of data modeling, partitioning, schema evolution, and metadata management at scale
- Hands-on experience with cloud object stores (e.g., AWS S3), lakehouse architectures, and data warehouse technologies
- The ability to drive technical discussions with both engineers and non-technical stakeholders
- Strong communication and leadership skills, with the ability to influence across teams and functions

Nice to have:
- Experience supporting ML/AI workflows at scale, from raw data ingestion to model training and evaluation
- Familiarity with data governance, lineage tracking, and observability tools
- Experience in autonomous systems, robotics, or other high-volume sensor data domains
- Contributions to open-source data infrastructure projects

Why Join Us?
You'll be at the heart of enabling autonomous driving innovation: building the systems that allow teams to harness massive amounts of real-world and simulated data for developing and validating our stack. You'll have the autonomy to set the technical direction, the opportunity to solve some of the hardest data engineering problems, and the ability to make a measurable impact on the success of our platform and products.

Compensation at Applied Intuition for eligible roles includes base salary, equity, and benefits. Base salary is a single component of the total compensation package, which may also include equity in the form of options and/or restricted stock units; comprehensive health, dental, vision, life, and disability insurance coverage; 401k retirement benefits with employer match; learning and wellness stipends; and paid time off. Note that benefits are subject to change and may vary based on jurisdiction of employment. Applied Intuition pay ranges reflect the minimum and maximum intended target base salary for new hire salaries for the position. The actual base salary offered to a successful candidate will additionally be influenced by a variety of factors including experience, credentials & certifications, educational attainment, skill level requirements, interview performance, and the level and scope of the position. Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in the location listed is: $180,000 - $225,000 USD annually.

Don't meet every single requirement? If you're excited about this role but your past experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles.

Applied Intuition is an equal opportunity employer and federal contractor or subcontractor. Consequently, the parties agree that, as applicable, they will abide by the requirements of 41 CFR 60-1.4(a), 41 CFR 60-300.5(a) and 41 CFR 60-741.5(a) and that these laws are incorporated herein by reference. These regulations prohibit discrimination against qualified individuals based on their status as protected veterans or individuals with disabilities, and prohibit discrimination against all individuals based on their race, color, religion, sex, sexual orientation, gender identity or national origin. These regulations require that covered prime contractors and subcontractors take affirmative action to employ and advance in employment individuals without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status or disability. The parties also agree that, as applicable, they will abide by the requirements of Executive Order 13496 (29 CFR Part 471, Appendix A to Subpart A), relating to the notice of employee rights under federal labor laws.
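The responsibilities above include establishing data contracts between producers and consumers. As a minimal, hypothetical sketch of what enforcing such a contract can look like, this validates one record against a declared schema; the field names and types are invented, and a production system would more likely use a schema registry or a format like Avro/Protobuf.

```python
# Hypothetical contract for a sensor-log record handed from ingestion to training.
CONTRACT = {"vehicle_id": str, "timestamp_ns": int, "sensor": str, "payload_uri": str}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations for one record (empty list = compliant)."""
    errors = [f"missing field: {f}" for f in CONTRACT if f not in record]
    errors += [
        f"field {f!r} expected {t.__name__}, got {type(record[f]).__name__}"
        for f, t in CONTRACT.items()
        if f in record and not isinstance(record[f], t)
    ]
    return errors

# One malformed record: timestamp arrives as a string, payload_uri is missing.
print(validate({"vehicle_id": "veh-7", "timestamp_ns": "not-an-int", "sensor": "lidar_front"}))
```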

Posted 30+ days ago

Plaid Inc., San Francisco, CA

$180,000 - $270,000 / year

We believe that the way people interact with their finances will drastically improve in the next few years. We're dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products. Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use. Plaid's network covers 12,000 financial institutions across the US, Canada, UK, and Europe. Founded in 2013, the company is headquartered in San Francisco with offices in New York, Washington D.C., London, and Amsterdam. #LI-Hybrid

The main goal of the Data Engineering (DE) team in 2024-25 is to build robust golden datasets to power our business goal of creating more insights-based products. Making data-driven decisions is key to Plaid's culture. To support that, we need to scale our data systems while maintaining correct and complete data. We provide tooling and guidance to teams across engineering, product, and business and help them explore our data quickly and safely to get the data insights they need, which ultimately helps Plaid serve our customers more effectively. Data Engineers heavily leverage SQL and Python to build data workflows. We use tools like dbt, Airflow, Redshift, Elasticsearch, Atlan, and Retool to orchestrate data pipelines and define workflows (a sketch of this style of orchestration follows this listing). We work with engineers, product managers, business intelligence, data analysts, and many other teams to build Plaid's data strategy and a data-first mindset. Our engineering culture is IC-driven: we favor bottom-up ideation and empowerment of our incredibly talented team. We are looking for engineers who are motivated by creating impact for our consumers and customers, growing together as a team, shipping the MVP, and leaving things better than we found them.

You will be in a high-impact role that will directly enable business leaders to make faster and more informed business judgments based on the datasets you build. You will have the opportunity to carve out the ownership and scope of internal datasets and visualizations across Plaid, a currently unowned area that we intend to take over and build SLAs on. You will have the opportunity to learn best practices and up-level your technical skills from our strong DE team and from the broader Data Platform team. You will collaborate with and have strong cross-functional partnerships with teams across Plaid, from Engineering to Product to Marketing and Finance.

Responsibilities
- Understanding different aspects of the Plaid product and strategy to inform golden dataset choices, design, and data usage principles
- Keeping data quality and performance top of mind while designing datasets
- Leading key data engineering projects that drive collaboration across the company
- Advocating for adopting industry tools and practices at the right time
- Owning core SQL and Python data pipelines that power our data lake and data warehouse, with well-documented data and defined dataset quality, uptime, and usefulness

Qualifications
- 4+ years of dedicated data engineering experience, solving complex data pipeline issues at scale
- You have experience building data models and data pipelines on top of large datasets (on the order of 500 TB to petabytes)
- You value SQL as a flexible and extensible tool, and are comfortable with modern SQL data orchestration tools like dbt, Mode, and Airflow
- You have experience working with different performant warehouses and data lakes: Redshift, Snowflake, Databricks
- You have experience building and maintaining batch and realtime pipelines using technologies like Spark and Kafka
- You appreciate the importance of schema design and can evolve an analytics schema on top of unstructured data
- You are excited to try out new technologies, and you like to produce proof-of-concepts that balance technical advancement with user experience and adoption
- You like to get deep in the weeds to manage, deploy, and improve low-level data infrastructure
- You are empathetic working with stakeholders: you listen to them, ask the right questions, and collaboratively come up with the best solutions for their needs while balancing infra and business needs
- You are a champion for data privacy and integrity, and always act in the best interest of consumers

$180,000 - $270,000 a year. The target base salary for this position ranges from $180,000/year to $270,000/year in Zone 1 and will vary based on the job's location. Our geographic zones are as follows:
- Zone 1: New York City and San Francisco Bay Area
- Zone 2: Los Angeles, Seattle, Washington D.C.
- Zone 3: Austin, Boston, Denver, Houston, Portland, Sacramento, San Diego
- Zone 4: Raleigh-Durham and all other US cities

Additional compensation in the form(s) of equity and/or commission is dependent on the position offered. Plaid provides a comprehensive benefit plan, including medical, dental, vision, and 401(k). Pay is based on factors such as (but not limited to) the scope and responsibilities of the position, the candidate's work experience and skillset, and location. Pay and benefits are subject to change at any time, consistent with the terms of any applicable compensation or benefit plans.

Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable. We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn't fully match the job description. We are always looking for team members that will bring something unique to Plaid! Plaid is proud to be an equal opportunity employer and values diversity at our company. We do not discriminate based on race, color, national origin, ethnicity, religion or religious belief, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, military or veteran status, disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local laws. Plaid is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance with your application or interviews due to a disability, please let us know at [email protected] Please review our Candidate Privacy Notice here.
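As a rough sketch of the SQL-plus-Python orchestration style the posting describes, here is a tiny Airflow DAG that rebuilds and then tests a set of dbt models. It assumes Airflow 2.x and dbt on the worker; the DAG id, schedule, and model selector are hypothetical, not Plaid's actual pipelines.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="golden_datasets_daily",      # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",                # daily at 06:00 UTC
    catchup=False,
) as dag:
    # Rebuild the warehouse models, then verify them before downstream consumers read.
    build = BashOperator(task_id="dbt_run", bash_command="dbt run --select golden")
    test = BashOperator(task_id="dbt_test", bash_command="dbt test --select golden")
    build >> test  # tests only run if the build succeeds
```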

Posted 30+ days ago

Sprinter Health, Menlo Park, CA

$180,000 - $230,000 / year

About Sprinter Health
At Sprinter Health, our mission is reimagining how people access care by bringing it directly to their homes. Nearly 30% of patients in the U.S. skip preventive or chronic care simply because they can't get to a doctor's office. For many, the ER becomes their first touchpoint with the healthcare system, driving over $300B in avoidable costs every year. By using the same technologies that power leading marketplace and last-mile platforms, we deliver care where people are, especially those who need it most. So far, we've supported more than 2 million patients across 22 states, completed 130,000+ in-home visits, and maintained a 92 NPS. Our team of clinicians, technologists, and operators has raised over $125M to date from investors like a16z, General Catalyst, GV, and Accel, and enjoys a multi-year runway. Data is core to how we deliver care, allocate clinicians, and understand population risk, and we're now building the team to scale that impact.

About the Role
We're hiring a Senior / Staff Data Scientist to tackle high-impact problems across logistics, patient behavior, clinical operations, and population health. This is a 0→1 role where your work directly shapes how care is delivered, optimized, and scaled. You'll turn messy, real-world healthcare data into insights that drive product direction, operational efficiency, and patient outcomes. If you like asking hard questions and using data to solve problems that matter in the real world, this is that opportunity.

Office Location
We are a hybrid company based in the Bay Area with offices in both San Francisco and Menlo Park. We care about work-life balance and understand that there will be times where flexibility is needed.

What you will do
- Explore clinical, operational, and patient engagement data to surface actionable insights
- Analyze routing, capacity planning, and supply-demand dynamics across regions
- Identify drivers of preventive care engagement and patient follow-through
- Predict cancellations, risk, and future care needs using behavioral and clinical signals
- Design and evaluate A/B experiments across product, growth, and operations (see the sketch after this listing)
- Support internal teams with dashboards, analytics, and ad hoc decisions
- Help shape the Data function's standards, tooling, and processes as an early leader

What you have done
- 5+ years in data science, advanced analytics, or applied data modeling
- Worked with large, messy datasets to answer ambiguous, high-stakes questions
- Built dashboards or visualizations using Looker, Tableau, Grafana, Superset, or similar
- Written high-quality SQL across warehouses like BigQuery, Snowflake, or Redshift
- Influenced product, ops, or strategy decisions through data insight and storytelling
- Presented findings to leadership and collaborated with cross-functional teams

What gives you an edge
- Experience with Python, Pandas, Scikit-learn, or ELK (Elasticsearch + Kibana)
- Exposure to GCP data tools like BigQuery, Dataflow, or Dataform
- Familiarity with healthcare data (claims, HL7, CCDAs, HIEs)
- Experience with logistics, routing, or operational optimization problems
- Designed or analyzed growth experiments or user behavior funnels
- Communicated insights to C-level stakeholders or mentored junior talent

Our Tech Stack
Python, SQL; Superset, Looker, Tableau, Grafana; BigQuery, Snowflake, Redshift; Elasticsearch / Kibana; FHIR, HL7, claims data (bonus); Pandas, Scikit-learn, Dataform, Dataflow

What we offer
- Meaningful pre-IPO equity
- Medical, dental, and vision plans 100% paid for you and your dependents
- Flexible PTO + 10 paid holidays per year
- 401(k) with match
- 16-week parental leave policy for the birthing parent, 8 weeks for all other parents
- HSA + FSA contributions
- Life insurance, plus short- and long-term disability coverage
- Free daily lunch in-office
- Annual learning stipend

$180,000 - $230,000 a year

Sprinter Health is an equal opportunity employer. We value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or other protected classes. Beware of recruitment fraud and scams that involve fictitious job descriptions followed by false job offers. If you are applying for a job, you can confirm the legitimacy of a job posting by viewing current open roles here. All legitimate job postings will require an application to be made directly on our official Sprinter Health Careers website. Job-related communications will only be sent from email addresses ending in @sprinterhealth.com. Please ensure that you're only replying to emails that end with @sprinterhealth.com.
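Since the role includes designing and evaluating A/B experiments, here is a minimal, hypothetical readout of one such experiment using a two-proportion z-test. The counts are invented and the scenario (a visit-reminder variant) is an assumption; it only assumes statsmodels is installed.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: completed visits out of scheduled, control vs. reminder variant.
successes = [412, 465]   # [control, variant]
totals = [1000, 1000]

stat, pvalue = proportions_ztest(successes, totals)
lift = successes[1] / totals[1] - successes[0] / totals[0]
print(f"absolute lift = {lift:+.1%}, z = {stat:.2f}, p = {pvalue:.4f}")
# A small p-value suggests the completion-rate difference is unlikely under the null.
```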

Posted 3 weeks ago

Castleton Commodities International LLC, Houston, TX
Castleton Commodities International (CCI) is seeking a Senior Data Operations Analyst to help support the day-to-day management, data quality, and continuous enhancement of the firm's enterprise Master Data Management (MDM) platform, TIBCO EBX. You will work with an offshore support team, business data owners, and technology partners to make certain that master data is complete, trusted, and readily available to trading, risk, finance, and analytics applications. Your remit spans monitoring platform health, troubleshooting data issues, coordinating the ServiceNow work queue, contributing to data model designs, and validating releases delivered through a structured SDLC (Jira).

Responsibilities:
Platform & Data Operations
- Act as a senior SME and L2/L3 support resource for TIBCO EBX, collaborating with an offshore team to ensure appropriate coverage.
- Enter and maintain master/reference data in EBX, enforcing stewardship workflows and governance rules.
- Monitor application jobs, security, and integrations; escalate issues and document resolution steps.
- Manage the ServiceNow work queue: triage, prioritize, assign, and track incidents, enhancements, and service requests against defined SLAs.

Data Design & Quality
- Partner with data owners to design and extend EBX data models, hierarchies, validation rules, and stewardship workflows.
- Investigate and resolve data errors surfaced by downstream systems or data quality rules; perform root-cause analysis and propose sustainable fixes.
- Develop data quality dashboards (Omni/Power BI) to track KPIs such as completeness, duplication, and timeliness.

Release & Change Management
- Coordinate with engineering to test EBX configuration changes, code deployments, and version upgrades.
- Author and execute regression and user-acceptance test (UAT) scripts; validate mappings between EBX and consuming systems (REST/SOAP, SQL, Kafka, etc.).
- Champion change-control best practices, ensuring all stories and tasks are effectively managed in Jira from requirements through deployment.

Continuous Improvement & Collaboration
- Analyze recurring data defects to recommend automation or rule enhancements that reduce manual touch points.
- Deliver training and knowledge-transfer sessions for end users and offshore analysts on EBX workflows and best practices.
- Support audit, compliance, and SOX requests related to MDM operational controls.

Qualifications:
- 5+ years of hands-on experience operating or supporting a commercial MDM platform (TIBCO EBX preferred; Informatica, Reltio, SAP MDG, etc. are acceptable).
- Solid grasp of core MDM concepts: golden-record management, hierarchy/versioning, data quality rules, stewardship workflows, matching/merging, and reference-data integration (see the matching sketch after this listing).
- Demonstrated experience managing or supporting operational queues in ServiceNow and project backlogs in Jira.
- Proficiency in SQL and one scripting language (Java, Python, or similar) for data investigation and automation.
- Proven record of partnering with offshore or managed-service teams, including defining SLAs and run-books.
- Strong analytical and problem-solving skills; ability to trace data symptoms to root cause across complex data flows.
- Excellent written and verbal communication skills; comfortable interfacing with both technical teams and front-office stakeholders.
- Must be able to work effectively in a fast-paced, dynamic, and high-intensity environment, including an open floor plan if applicable to the position, with timely responsiveness and the ability to work beyond normal business hours when required.

Preferred Qualifications:
- Prior exposure to energy-trading or commodity-trading reference data.
- Experience configuring EBX data models, workflows, validation rules, and user roles.
- Familiarity with data-catalog/governance tooling (Collibra, Alation, Atlan) and its integration with MDM.
- Knowledge of API integrations (REST/SOAP), message queues (Kafka), and cloud data platforms (Azure Synapse, Amazon Redshift, Databricks).
- ITIL or similar service-management certification.

Employee Programs & Benefits:
CCI offers competitive benefits and programs to support our employees, their families, and local communities. These include: competitive, comprehensive medical, dental, retirement, and life insurance benefits; employee assistance & wellness programs; parental and family leave policies; CCI in the Community (each office has a Charity Committee, and employees are allocated 2 days annually to volunteer at the selected charities); a charitable contribution match program; tuition assistance & reimbursement; Quarterly Innovation & Collaboration Awards; an employee discount program, including access to fitness facilities; competitive paid time off; and continued learning opportunities. Visit https://www.cci.com/careers/life-at-cci/ to learn more! #LI-CD1
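As a minimal, hypothetical illustration of the matching/merging step behind golden-record management, this sketch pairs similar counterparty names for steward review. The stdlib difflib ratio stands in for a real MDM match engine, and the records and threshold are invented.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical counterparty records from two source systems.
records = [
    {"id": "SYS1-001", "name": "Acme Energy Trading LLC"},
    {"id": "SYS2-417", "name": "ACME Energy Trading, L.L.C."},
    {"id": "SYS1-002", "name": "Bolt Petroleum Inc"},
]

# Flag pairs above a similarity threshold as merge candidates for a data steward.
THRESHOLD = 0.85
for i, a in enumerate(records):
    for b in records[i + 1:]:
        score = similarity(a["name"], b["name"])
        if score >= THRESHOLD:
            print(f"merge candidate: {a['id']} <-> {b['id']} (score {score:.2f})")
```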

Posted 4 weeks ago

Pony AI, Fremont, CA
Founded in 2016 in Silicon Valley, Pony.ai has quickly become a global leader in autonomous mobility and a pioneer in extending autonomous mobility technologies and services across a rapidly expanding footprint of sites around the world. Operating Robotaxi, Robotruck, and Personally Owned Vehicles (POV) business units, Pony.ai is an industry leader in the commercialization of autonomous driving and is committed to developing the safest autonomous driving capabilities on a global scale. Pony.ai's leading position has been recognized, with CNBC ranking Pony.ai #10 on its CNBC Disruptor list of the 50 most innovative and disruptive tech companies of 2022. In June 2023, Pony.ai was recognized on the XPRIZE and Bessemer Venture Partners inaugural "XB100" 2023 list of the world's top 100 private deep tech companies, ranking #12 globally. As of August 2023, Pony.ai had accumulated nearly 21 million miles of autonomous driving globally. Pony.ai went public on NASDAQ in November 2024.

Responsibilities
- Accurately annotate diverse self-driving vehicle data, including:
  - Point cloud 3D bounding boxes: precisely define objects in 3D point cloud data (see the sketch after this listing)
  - Image bounding boxes: create 2D bounding boxes around objects in images
  - Image segmentation: perform pixel-level segmentation of objects in images
- Adhere strictly to annotation guidelines and quality standards.
- Provide feedback on annotation tool features and data quality issues.
- Collaborate with the data science and engineering teams to refine annotation processes.
- Maintain high levels of productivity and accuracy in a fast-paced environment.
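The annotation types above have simple geometric representations. As a rough, hypothetical illustration (the field names are assumptions, not Pony.ai's actual label format), a 3D point-cloud box is typically a labeled center, size, and heading:

```python
from dataclasses import dataclass

@dataclass
class Box3D:
    """One annotated object in a lidar point cloud; fields are a common convention."""
    label: str     # e.g. "car", "pedestrian"
    cx: float      # box center x, in meters, in the sensor frame
    cy: float      # box center y
    cz: float      # box center z
    length: float  # extent along the heading direction
    width: float
    height: float
    yaw: float     # heading angle around the vertical axis, in radians

box = Box3D("car", cx=12.4, cy=-3.1, cz=0.9, length=4.5, width=1.9, height=1.6, yaw=0.12)
print(box)
```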

Posted 30+ days ago

Corebridge Financial Inc., Houston, TX

$130,000 - $150,000 / year

Who We Are
At Corebridge Financial, we believe action is everything. That's why every day we partner with financial professionals and institutions to make it possible for more people to take action in their financial lives, for today and tomorrow. We align to a set of Values that are the core pillars that define our culture and help bring our brand purpose to life:
- We are stronger as one: We collaborate across the enterprise, scale what works, and act decisively for our customers and partners.
- We deliver on commitments: We are accountable, empower each other, and go above and beyond for our stakeholders.
- We learn, improve and innovate: We get better each day by challenging the status quo and equipping ourselves for the future.
- We are inclusive: We embrace different perspectives, enabling our colleagues to make an impact and bring their whole selves to work.

Who You'll Work With
The Information Technology organization is the technological foundation of our business and works in collaboration with our partners from across the company. The team drives technology and digital transformation, partners with business leaders to design and execute new strategies through IT and operations services, and ensures the necessary IT risk management and security measures are in place and aligned with enterprise architecture standards and principles.

About The Role
We are seeking a highly skilled Data Modeler with a strong background in the insurance industry to join our growing Data & Analytics team. This role is critical in shaping how we manage, store, and leverage data across the organization to drive insights and operational excellence. You will design, develop, and maintain logical and physical data models that support business intelligence, analytics, and operational reporting, with a strong focus on insurance-specific data structures such as policy, claims, underwriting, billing, and reinsurance. The candidate will be part of the Information Management organization responsible for architecting and developing a Data Integration Hub that enables Corebridge to leverage data as an asset by providing a single authoritative view of the business and a layer of separation between our complex data sources and our data consumers, thereby allowing each data layer to evolve independently.

Responsibilities
- Architect, design, and conceptually, logically, and physically model data that exists in sourcing systems, data lakes, data warehouses, and data marts.
- Design complex insurance-industry-standard data models based on ACORD principles.
- Develop and enforce data modeling standards, best practices, and metadata management processes.
- Develop conceptual subject area data models and data flow diagrams.
- Participate in data governance initiatives and ensure data integrity, consistency, and quality.
- Create and maintain data dictionaries, ER diagrams, and mapping documentation.
- Develop and maintain metadata definitions for sourcing data stores and all evolutions of the Data Lake, from ingested sources to consumers.
- Communicate, present, and apply data modeling best practices to support application performance, application storage efficiency, and system extensibility.
- Work closely with administrators, architects, and application teams to ensure applications are performing well based on the modeling designs deployed.
- Facilitate data requirements meetings with stakeholders to drive modeling solutions.
- Develop and maintain comprehensive data model documentation and metadata repositories.
- Assist developers with query development and performance optimization.
- Work closely with Management and Data Analyst teams to achieve company business objectives.
- Collaborate with other technology teams and architects to define and develop solutions. This is a hands-on role.

Skills and Qualifications
- Bachelor's degree in Information Technology or a related field preferred, or equivalent work experience.
- 10+ years of experience in the IT industry; the position calls for a seasoned IT professional with senior leadership qualities and background.
- 8+ years of experience building and managing complex data model solutions.
- 4+ years of experience with distributed, highly scalable, multi-node environments utilizing Big Data ecosystems highly preferred.
- Strong working knowledge of ACORD.
- Deep understanding of the Life Insurance and Retirement (Individual and Group) industry space preferred.
- Excellent technical and organizational skills.
- Familiarity with data governance frameworks (Collibra, Informatica, Erwin).
- Strong communication and team skills.
- Proficiency with Agile development practices.
- Experience with relational and dimensional data modeling.
- Experience with canonical modeling for enterprise integration solutions.
- Strong knowledge of reporting modeling concepts, including dimensional and snowflake schemas, slowly changing dimensions, and surrogate, compound, and intelligent keys (see the SCD sketch after this listing).
- Experience with cloud platforms like AWS and Azure.
- Experience using data modeling tools like Erwin.
- Strong SQL skills and experience working with relational/dimensional databases (e.g., Oracle, SQL Server, Snowflake) and NoSQL databases like Cassandra and AWS DynamoDB.
- Familiarity with data warehousing concepts and ETL processes.

Compensation
The anticipated salary range for this position is $130,000 to $150,000 at the commencement of employment for the Jersey City, NJ area. Not all candidates will be eligible for the upper end of the salary range. The actual compensation offered will ultimately be dependent on multiple factors, which may include the candidate's geographic location, skills, experience, and other qualifications. In addition, the position is eligible for a discretionary bonus in accordance with the terms of the applicable incentive plan. Corebridge also offers a range of competitive benefits as part of the total compensation package, as detailed below.

Work Location
This position is based in Corebridge Financial's Houston, TX and Jersey City, NJ offices and is subject to our hybrid working policy, which gives colleagues the benefits of working both in an office and remotely.

Estimated Travel
May include up to 25%. #LI-SAFG #LI-CW1 #LI-Hybrid

Why Corebridge?
At Corebridge Financial, we prioritize the health, well-being, and work-life balance of our employees. Our comprehensive benefits and wellness program is designed to support employees both personally and professionally, ensuring that they have the resources and flexibility needed to thrive.

Benefit Offerings Include:
- Health and Wellness: We offer a range of medical, dental, and vision insurance plans, as well as mental health support and wellness initiatives to promote overall well-being.
- Retirement Savings: We offer retirement benefits options, which vary by location. In the U.S., our competitive 401(k) Plan offers a generous dollar-for-dollar Company matching contribution of up to 6% of eligible pay and a Company contribution equal to 3% of eligible pay (subject to annual IRS limits and Plan terms). These Company contributions vest immediately.
- Employee Assistance Program: Confidential counseling services and resources are available to all employees.
- Matching charitable donations: Corebridge matches donations to tax-exempt organizations 1:1, up to $5,000.
- Volunteer Time Off: Employees may use up to 16 volunteer hours annually to support activities that enhance and serve communities where employees live and work.
- Paid Time Off: Eligible employees start off with at least 24 Paid Time Off (PTO) days so they can take time off for themselves and their families when they need it.

Eligibility for and participation in employer-sponsored benefit plans and Company programs will be subject to applicable law, governing Plan document(s), and Company policy.

We are an Equal Opportunity Employer
Corebridge Financial is committed to being an equal opportunity employer, and we comply with all applicable federal, state, and local fair employment laws. All applicants will be considered for employment based on job-related qualifications and without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, disability, neurodivergence, age, veteran status, or any other protected characteristic. The Company is also committed to compliance with all fair employment practices regarding citizenship and immigration status. At Corebridge Financial, we believe that diversity and inclusion are critical to building a creative workplace that leads to innovation, growth, and profitability. Through a wide variety of programs and initiatives, we invest in each employee, seeking to ensure that our colleagues are respected as individuals and valued for their unique perspectives. Corebridge Financial is committed to working with and providing reasonable accommodations to job applicants and employees, including any accommodations needed on the basis of physical or mental disabilities or sincerely held religious beliefs. If you believe you need a reasonable accommodation in order to search for a job opening or to complete any part of the application or hiring process, please send an email to TalentandInclusion@corebridgefinancial.com. Reasonable accommodations will be determined on a case-by-case basis, in accordance with applicable federal, state, and local law. We will consider for employment qualified applicants with criminal histories, consistent with applicable law. To learn more please visit: www.corebridgefinancial.com

Functional Area: DT - Data
Estimated Travel Percentage (%): No Travel
Relocation Provided: No
American General Life Insurance Company
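Among the modeling concepts the qualifications list, slowly changing dimensions and surrogate keys combine in a standard way. Here is a minimal, hypothetical Type 2 SCD update sketch; pandas stands in for the warehouse, and the policyholder table, columns, and values are invented.

```python
import pandas as pd

# Current dimension rows (hypothetical policyholder dimension, SCD Type 2).
dim = pd.DataFrame([
    {"policy_sk": 1, "policy_id": "P-100", "state": "TX",
     "valid_from": "2023-01-01", "valid_to": None, "is_current": True},
])

# An incoming change: the policyholder moved from TX to NJ.
change = {"policy_id": "P-100", "state": "NJ", "effective": "2024-06-01"}

# Expire the current row: close its validity window and clear the current flag.
mask = (dim["policy_id"] == change["policy_id"]) & dim["is_current"]
dim.loc[mask, ["valid_to", "is_current"]] = [change["effective"], False]

# Insert a new current row under a fresh surrogate key, preserving history.
new_row = {
    "policy_sk": dim["policy_sk"].max() + 1,
    "policy_id": change["policy_id"],
    "state": change["state"],
    "valid_from": change["effective"],
    "valid_to": None,
    "is_current": True,
}
dim = pd.concat([dim, pd.DataFrame([new_row])], ignore_index=True)
print(dim)
```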

Posted 30+ days ago

Guidehouse, Arlington, VA
Job Family: Data Science Consulting
Travel Required: Up to 25%
Clearance Required: Ability to Obtain Public Trust

What You Will Do:
We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics.

Client Leadership & Delivery
- Collaborate with FCA clients to understand data architecture and reporting needs.
- Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau.
- Ensure delivery excellence and measurable outcomes across data migration and visualization efforts.

Solution Development & Innovation
- Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python (see the sketch after this listing).
- Develop and optimize Tableau dashboards aligned with federal reporting standards.
- Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting.

Practice & Team Leadership
- Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions.
- Support documentation, testing, and deployment of data products.
- Mentor junior developers and contribute to reusable frameworks and accelerators.

What You Will Need:
- US citizenship is required
- Bachelor's degree is required
- Minimum FIVE (5) years of experience in data engineering and dashboard development
- Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure, GCP)
- Strong proficiency in SQL, Python, and Spark
- Experience building ETL pipelines and integrating data sources into reporting platforms
- Familiarity with data governance, metadata, and compliance frameworks
- Excellent communication, facilitation, and stakeholder engagement skills

What Would Be Nice To Have:
- Master's degree
- AI/LLM certifications
- Experience working with FCA clients such as DOT, GSA, USDA, or similar
- Familiarity with federal contracting and procurement processes

What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include: Medical, Rx, Dental & Vision Insurance; Personal and Family Sick Time & Company Paid Holidays; eligibility for a discretionary variable incentive bonus; Parental Leave and Adoption Assistance; 401(k) Retirement Plan; Basic Life & Supplemental Life; Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts; Short-Term & Long-Term Disability; Student Loan PayDown; Tuition Reimbursement, Personal Development & Learning Opportunities; Skills Development & Certifications; Employee Referral Program; Corporate Sponsored Events & Community Outreach; Emergency Back-Up Childcare Program; and Mobility Stipend.

About Guidehouse
Guidehouse is an Equal Opportunity Employer: protected veterans, individuals with disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinances of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains, including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
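A minimal sketch of the Spark/SQL/Python ETL pattern the posting names: ingest a raw file, standardize types, and aggregate for a reporting extract. The file paths, columns, and aggregation are hypothetical, not a Guidehouse deliverable; it assumes a local PySpark installation.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fca_etl_sketch").getOrCreate()

# Extract: read raw agency records (hypothetical local path and schema).
raw = spark.read.option("header", True).csv("data/raw/grants.csv")

# Transform: cast types and drop rows missing required keys.
clean = (
    raw.withColumn("award_amount", F.col("award_amount").cast("double"))
       .withColumn("fiscal_year", F.col("fiscal_year").cast("int"))
       .dropna(subset=["agency", "fiscal_year"])
)

# Load: aggregate per agency and year, then write a columnar extract for dashboards.
summary = clean.groupBy("agency", "fiscal_year").agg(
    F.sum("award_amount").alias("total_awarded"),
    F.count("*").alias("award_count"),
)
summary.write.mode("overwrite").parquet("data/curated/grants_by_agency")
```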

Posted 30+ days ago

Truveta, Seattle, WA

$105,000 - $130,000 / year

Healthcare Data Analyst, Data Ecosystem Team Truveta is the world's first health provider led data platform with a vision of Saving Lives with Data. Our mission is to enable researchers to find cures faster, empower every clinician to be an expert, and help families make the most informed decisions about their care. Achieving Truveta' s ambitious vision requires an incredible team of talented and inspired people with a special combination of health, software and big data experience who share our company values. Truveta was born in the Pacific Northwest, but we have employees who live across the country. Our team enjoys the flexibility of a hybrid model and working from anywhere. In person attendance is required for one week during the year for Truveta Planning Week. For overall team productivity, we optimize meeting hours in the pacific time zone. We avoid scheduling recurring meetings that start after 3pm PT, however, ad hoc meetings occur between 8am-6pm Pacific time. #LI-remote Who We Need Truveta is rapidly building a talented and diverse team to tackle complex health and technical challenges. Beyond core capabilities, we are seeking problem solvers, passionate and collaborative teammates, and those willing to roll up their sleeves while making a difference. If you are interested in the opportunity to pursue purposeful work, join a mission-driven team, and build a rewarding career while having fun, Truveta may be the perfect fit for you. This Opportunity As part of the Ecosystem division, the newly formed Healthcare Analytics team is central to delivering on Truveta's mission by empowering health system clinical and administrative leaders to measure, learn, and improve. We are building exemplary metrics, dashboards, and benchmarks that inspire adoption of Truveta by our member health systems. The Healthcare Analytics team is looking for a Healthcare Data Analyst who thrives at the intersection of EHR data expertise, rigorous analytics qualifications, and collaborative problem solving. You will play a critical role in creating high-quality analytic outputs that health systems can adopt and customize to improve care quality, population health, clinical operations, and financial sustainability, solving consequential problems in healthcare using an EHR database of ~120 million patients (and growing!), while positively impacting patient outcomes. This role is ideal for someone with hands-on experience working with EHR data, strong data wrangling skills, and a passion for turning data into meaningful insight that resonates with clinicians, health system executives, and operational leaders. As a Healthcare Data Analyst, you will have the opportunity to translate complex clinical and claims data into clear, defensible evidence that supports member initiatives in safety, quality, cost reduction, and growth. Responsibilities Develop iconic analytic outputs (studies, dashboards, benchmarks) that demonstrate Truveta's unique value and inspire members to replicate, customize, and apply insights to address common, high-priority health system challenges. Wrangle large-scale healthcare datasets and build reproducible queries using SQL, R, and/or Python to scope analytic use cases, assess feasibility, and deliver studies and dashboards within agreed timelines, while developing subject matter expertise in Truveta's proprietary coding language and analytics platform. 
Engage with clinical, quality, and operational leaders by delivering case studies, interactive demos, and analytic output that showcase Truveta's differentiated capabilities and highlight how Truveta can impact healthcare's mission and margin objectives. Collaborate closely with cross-functional teams to validate data quality, investigate issues, and provide feedback that informs Truveta's product roadmap. Use AI thoughtfully and strategically to spark new ideas and tackle problems. Apply AI to speed feedback loops, test hypotheses, and deliver insights faster, while balancing judgment, creativity, and an awareness of its limitations. Required Skills Undergraduate or graduate (preferred) education in data analysis, clinical informatics, epidemiology, public health, or a related field. Experience working with large relational database consisting of millions of patients' records. Experience building dashboards, benchmarks, or metrics to achieve measurable improvement in health system operations, quality outcomes, or population health. 2+ years of experience wrangling and analyzing EHR data or other real-world data sources using SQL, R and Python. Knowledge of clinical terminologies such as ICD, SNOMED, LOINC, RxNorm, or NDC. A willingness to learn new coding languages including internal proprietary coding language to analyze data and build cohorts. Experience translating healthcare and operational concepts into analytic workflows. Strong communication skills to present insights and results to both technical and non-technical audiences. Ability to learn and adapt quickly in a dynamic start-up environment. Preferred Qualifications These qualifications are preferred but not required, please do not let them stop you from applying for this role. You will likely get the opportunity to learn how to do these more advanced analyses if you don't already have experience with them. Experience working with unstructured clinical data, natural language processing outputs, or AI/ML tools Knowledge of distributed computing platforms (Spark) and associated data analysis languages (Spark SQL, PySpark, SparkR) Experience building cohort definitions, defining metrics, and interpreting analytic findings Why Truveta? Be a part of building something special. Now is the perfect time to join Truveta. We have strong, established leadership with decades of success. We are well-funded. We are building a culture that prioritizes people and their passions across personal, professional and everything in between. Join us as we build an amazing company together. We Offer: Interesting and meaningful work for every career stage Great benefits package Comprehensive benefits with strong medical, dental and vision insurance plans 401K plan Professional development & training opportunities for continuous learning Work/life autonomy via flexible work hours and flexible paid time off Generous parental leave Regular team activities (virtual and in-person as soon as we are able) The base pay for this position is $105,000 to $130,000. The pay range reflects the minimum and maximum target. Pay is based on several factors including location and may vary depending on job-related knowledge, skills, and experience. Certain roles are eligible for additional compensation such as incentive pay and stock options. If you are based in California, we encourage you to read this important information for California residents linked here. Truveta is committed to creating a diverse, inclusive, and empowering workplace. 
We believe that having employees, interns, and contractors with diverse backgrounds enables Truveta to better meet our mission and serve patients and health communities around the world. We recognize that opportunities in technology have historically excluded, and continue to disproportionately exclude, Black and Indigenous people, people of color, people from working-class backgrounds, people with disabilities, and LGBTQIA+ people. We strongly encourage individuals with these identities to apply even if you don't meet all of the requirements. Please note that all applicants must be authorized to work in the United States for any employer, as we are unable to sponsor work visas or permits (e.g., F-1 OPT, H-1B) at this time. We appreciate your interest in the position and encourage you to explore future opportunities with us.
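As a rough illustration of the cohort-building responsibility named above, here is a minimal, hypothetical sketch of a reproducible SQL cohort query run from Python. The table, columns, and sample rows are invented for illustration; they are not Truveta's actual schema or proprietary query language.

```python
# Hypothetical sketch: a reproducible cohort query of the kind an EHR
# analyst might write. Schema and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE condition_occurrence (
    patient_id TEXT,
    icd10_code TEXT,      -- diagnosis code (ICD-10-CM)
    diagnosis_date TEXT   -- ISO-8601 date
);
INSERT INTO condition_occurrence VALUES
    ('p1', 'E11.9', '2023-04-01'),   -- type 2 diabetes mellitus
    ('p2', 'I10',   '2023-05-12'),   -- essential hypertension
    ('p1', 'I10',   '2023-06-20');
""")

# Cohort: distinct patients with any type 2 diabetes diagnosis (ICD-10 E11.*).
cohort = conn.execute("""
    SELECT DISTINCT patient_id
    FROM condition_occurrence
    WHERE icd10_code LIKE 'E11%'
""").fetchall()
print([row[0] for row in cohort])  # -> ['p1']
```

In practice the same pattern scales from this toy table to a warehouse of millions of records; the point is that the cohort definition lives in one reviewable, rerunnable query.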

Posted 30+ days ago

Guidehouse logo
GuidehouseMcLean, VA
Job Family: Data Science Consulting
Travel Required: Up to 25%
Clearance Required: Ability to Obtain Public Trust

What You Will Do:

We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics.

Client Leadership & Delivery
  • Collaborate with FCA clients to understand data architecture and reporting needs.
  • Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau.
  • Ensure delivery excellence and measurable outcomes across data migration and visualization efforts.

Solution Development & Innovation
  • Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python (a minimal sketch follows this posting).
  • Develop and optimize Tableau dashboards aligned with federal reporting standards.
  • Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting.

Practice & Team Leadership
  • Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions.
  • Support documentation, testing, and deployment of data products.
  • Mentor junior developers and contribute to reusable frameworks and accelerators.

What You Will Need:
  • US Citizenship is required
  • Bachelor's degree is required
  • Minimum FIVE (5) years of experience in data engineering and dashboard development
  • Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure, GCP)
  • Strong proficiency in SQL, Python, and Spark
  • Experience building ETL pipelines and integrating data sources into reporting platforms
  • Familiarity with data governance, metadata, and compliance frameworks
  • Excellent communication, facilitation, and stakeholder engagement skills

What Would Be Nice To Have:
  • Master's Degree
  • AI/LLM Certifications
  • Experience working with FCA clients such as DOT, GSA, USDA, or similar
  • Familiarity with federal contracting and procurement processes

What We Offer:

Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include:
  • Medical, Rx, Dental & Vision Insurance
  • Personal and Family Sick Time & Company Paid Holidays
  • Position may be eligible for a discretionary variable incentive bonus
  • Parental Leave and Adoption Assistance
  • 401(k) Retirement Plan
  • Basic Life & Supplemental Life
  • Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
  • Short-Term & Long-Term Disability
  • Student Loan PayDown
  • Tuition Reimbursement, Personal Development & Learning Opportunities
  • Skills Development & Certifications
  • Employee Referral Program
  • Corporate Sponsored Events & Community Outreach
  • Emergency Back-Up Childcare Program
  • Mobility Stipend

About Guidehouse

Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco.

If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.

All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains, including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.

Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
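To give a concrete sense of the Spark-based ETL work this posting describes, here is a minimal PySpark sketch. The application name, sample rows, column names, and output path are all illustrative assumptions, not anything specified by Guidehouse.

```python
# Minimal, hypothetical PySpark ETL sketch: read raw records, apply a
# simple transformation, and write an analytics-ready Parquet output
# that a Tableau data source could point at. Names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fca_etl_sketch").getOrCreate()

# Extract: in a real pipeline this would be spark.read from a source system.
raw = spark.createDataFrame(
    [("2024-01-03", "GSA", 1200.0), ("2024-01-04", "DOT", 87.5)],
    ["report_date", "agency", "obligation_usd"],
)

# Transform: parse dates and aggregate obligations by agency.
clean = (
    raw.withColumn("report_date", F.to_date("report_date"))
       .groupBy("agency")
       .agg(F.sum("obligation_usd").alias("total_obligation_usd"))
)

# Load: write a Parquet extract for downstream dashboards.
clean.write.mode("overwrite").parquet("/tmp/fca_obligations_by_agency")
```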

Posted 30+ days ago

CZ Biohub logo
CZ BiohubRedwood City, CA

$241,000 - $331,100 / year

Biohub is leading the new era of AI-powered biology to cure or prevent disease through its 501(c)(3) medical research organization, with the support of the Chan Zuckerberg Initiative.

The Team

Biohub supports the science and technology that will make it possible to help scientists cure, prevent, or manage all diseases by the end of this century. While this may seem like an audacious goal, in the last 100 years biomedical science has made tremendous strides in understanding biological systems, advancing human health, and treating disease. Achieving our mission will only be possible if scientists are able to better understand human biology. To that end, we have identified four grand challenges that will unlock the mysteries of the cell and how cells interact within systems, paving the way for new discoveries that will change medicine in the decades that follow:
  • Building an AI-based virtual cell model to predict and understand cellular behavior
  • Developing novel imaging technologies to map, measure, and model complex biological systems
  • Creating new tools for sensing and directly measuring inflammation within tissues in real time, to better understand inflammation, a key driver of many diseases
  • Harnessing the immune system for early detection, prevention, and treatment of disease

The Opportunity

At Biohub, we are generating unprecedented scientific datasets that drive biological modeling innovation:
  • Billions of standardized cells of single-cell transcriptomic data, with a focus on measuring genetic and environmental perturbations
  • Tens of thousands of donor-matched DNA & RNA samples
  • PB-scale static and dynamic imaging datasets
  • TB-scale mass spectrometry datasets
  • Diverse, large multi-modal biological datasets that enable biological bridges across measurement types and facilitate multi-modal model training to define how cells act

After model training, we make all data products available through public resources like CELLxGENE Discover and the CryoET Portal, used by tens of thousands of scientists monthly to advance understanding of genetic variants, disease risk, drug toxicities, and therapeutic discovery.

As a Senior Staff Data Scientist, you'll lead the creation of groundbreaking imaging datasets that decode cellular function at the molecular level, describe development, and predict responses to genetic or environmental changes. Working at the intersection of data science, biology, and AI, you'll define data needs, format standards, analysis approaches, quality metrics, and our technical strategy, creating systems to ingest, transform, validate, and deploy data products. Success in this role means delivering high-quality, usable datasets that directly address modeling challenges and accelerate scientific progress. Join us in building the data foundation that will transform our understanding of human biology and move us along the path to curing, preventing, and managing all disease.

What You'll Do
  • Define the technical strategy for a robust imaging data ecosystem; build data ingestion pipelines, define data formats, and write validation tools, QC metrics, and analysis pipelines (a minimal QC sketch follows this posting).
  • Collaborate with ML engineers, AI Researchers, and Data Engineers to iteratively evaluate, refine, and grow datasets to maximize model performance.
  • Discover and define new data generation opportunities, and manage the delivery of those data products to our AI team.
  • Collaborate with engineers, product managers, UX designers, and other data scientists to publish valuable datasets as part of CZI's open data ecosystem.
What You'll Bring
  • 10+ years of experience with large-scale biological imaging data
  • Demonstrated delivery of multiple large biological data products
  • Experience with big data: extraction, transport, loading, databases, standardization, validation, QC, and analysis
  • Experience with processing and orchestration pipelines, such as Argo Workflows or Databricks
  • Strong fundamentals in statistical reasoning and machine learning
  • Experience with biological data analysis and QC best practices
  • Excellent written and verbal communication skills
  • Enthusiasm to ramp up on technologies and learn new domains
  • Experience working in a multidisciplinary environment (engineering, product, AI Research)

Compensation

The Redwood City, CA base pay range for a new hire in this role is $241,000 - $331,100. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in the range is based on job-related skills and experience, as evaluated throughout the interview process.

Better Together

As we grow, we're excited to strengthen in-person connections and cultivate a collaborative, team-oriented environment. This role is a hybrid position requiring you to be onsite for at least 60% of the working month, approximately 3 days a week, with specific in-office days determined by the team's manager. The exact schedule will be at the hiring manager's discretion and communicated during the interview process.

Benefits for the Whole You

We're thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible:
  • A generous employer match on employee 401(k) contributions to support planning for the future
  • Paid time off to volunteer at an organization of your choice
  • Funding for select family-forming benefits
  • Relocation support for employees who need assistance moving

If you're interested in a role but your previous experience doesn't perfectly align with each qualification in the job description, we still encourage you to apply, as you may be the perfect fit for this or another role. #LI-Hybrid
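As a small illustration of the imaging QC metrics mentioned above, here is a hypothetical sketch that flags a frame whose saturation fraction or dynamic range looks anomalous. The thresholds, bit depth, and frame shape are invented for illustration and are not Biohub's actual pipeline.

```python
# Hypothetical imaging QC sketch: compute simple per-frame metrics and a
# pass/fail flag. Thresholds and array shapes are illustrative only.
import numpy as np

def qc_frame(frame: np.ndarray, max_value: int = 65535,
             max_saturated_frac: float = 0.01) -> dict:
    """Return basic QC metrics and a pass/fail flag for one image frame."""
    saturated_frac = float(np.mean(frame >= max_value))
    dynamic_range = float(frame.max() - frame.min())
    return {
        "mean_intensity": float(frame.mean()),
        "saturated_frac": saturated_frac,
        "dynamic_range": dynamic_range,
        "passes_qc": saturated_frac <= max_saturated_frac and dynamic_range > 0,
    }

rng = np.random.default_rng(0)
frame = rng.integers(0, 65536, size=(512, 512), dtype=np.uint16)  # toy 16-bit frame
print(qc_frame(frame))
```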

Posted 30+ days ago

Geico Insurance logo
Geico InsuranceChevy Chase, MD

$88,150 - $157,850 / year

At GEICO, we offer a rewarding career where your ambitions are met with endless possibilities. Every day we honor our iconic brand by offering quality coverage to millions of customers and being there when they need us most. We thrive through relentless innovation to exceed our customers' expectations while making a real impact for our company through our shared purpose. When you join our company, we want you to feel valued, supported, and proud to work here. That's why we offer The GEICO Pledge: Great Company, Great Culture, Great Rewards and Great Careers.

GEICO is looking for a customer-obsessed and results-oriented Product Manager to support our Data Ingestion and Movement platform. This role will help drive product direction for our data ingestion, ETL/ELT pipelines, and data movement services, focusing on enabling reliable data flow into our lakehouse and other data stores. The ideal candidate will have a technical background in data engineering and experience delivering scalable data platforms and data pipeline solutions.

Description

As a Product Manager for Data Ingestion and Movement, you will be responsible for supporting the product vision and execution for GEICO's data ingestion and movement products. To successfully shape a platform that enables pipeline-as-a-service and supports a scalable data mesh architecture, a strong technical understanding of data pipelines, data integration patterns, data orchestration, ETL/ELT processes, and platform engineering is essential. Your goal is to abstract complexity and empower domain teams to autonomously and efficiently build, deploy, and govern data pipelines. This role also requires stakeholder management skills and the ability to bridge technical solutions with business value.

Key Responsibilities
  • Support the development and execution of a data ingestion and movement platform vision aligned with business goals and customer needs
  • Help create and maintain a clear, prioritized roadmap for data ingestion and movement capabilities that balances short-term delivery with long-term strategic objectives
  • Support evangelizing the Data Ingestion and Movement platform across the organization and help drive stakeholder alignment
  • Stay abreast of industry trends and the competitive landscape (Apache Kafka, Apache Airflow, AWS Glue, Azure Data Factory, Google Cloud Dataflow, etc.) to inform data ingestion strategy (a minimal ingestion sketch follows this posting)
  • Support requirement gathering and product strategy for data ingestion, ETL/ELT pipelines, and data movement services
  • Understand end-to-end data ingestion workflows and how data movement fits into the broader data ecosystem and downstream analytics
  • Support data governance initiatives for data lineage, quality, and compliance in data ingestion and movement processes
  • Ensure data ingestion and movement processes adhere to regulatory, compliance, and data quality standards
  • Partner with engineering on the development of data ingestion tools, pipeline orchestration services, and data movement capabilities
  • Help define product capabilities for data ingestion, pipeline monitoring, error handling, and data quality validation to improve reliability and performance
  • Support customer roadshows and training on data ingestion and movement capabilities
  • Build instrumentation and observability into data ingestion and movement tools to enable data-driven product decisions and pipeline monitoring
  • Work closely with engineering, data engineering, and data teams to ensure seamless delivery of data ingestion and movement products
  • Partner with customer success, support, and engineering teams to create clear feedback loops
  • Translate data ingestion and movement technical capabilities into business value and user benefits
  • Support alignment across multiple stakeholders and teams in complex, ambiguous environments

Qualifications

Required
  • Understanding of data ingestion patterns, ETL/ELT processes, and data pipeline architectures (Apache Kafka, Apache Airflow, Apache Spark, AWS Glue, etc.)
  • Experience with data integration APIs, connectors, and data pipeline orchestration tools
  • Basic understanding of data pipeline monitoring, observability, and data quality validation practices
  • Experience in cloud data ecosystems (AWS, GCP, Azure)
  • Proven analytical and problem-solving abilities with a data-driven approach to decision-making
  • Experience working with Agile methodologies and tools (JIRA, Azure DevOps)
  • Good communication, stakeholder management, and cross-functional collaboration skills
  • Strong organizational skills with the ability to manage product backlogs

Preferred
  • Previous experience as a software or data engineer
  • Strong business acumen to prioritize features based on customer value and business impact
  • Experience with data ingestion tools (Apache Kafka, Apache NiFi, AWS Kinesis, Azure Event Hubs, etc.)
  • Knowledge of data lineage, data quality frameworks, and compliance requirements for data ingestion
  • Insurance industry experience

Experience
  • Minimum 5+ years of technical product management experience building platforms that support data ingestion, ETL/ELT pipelines, data engineering, and data infrastructure
  • Track record of delivering successful products in fast-paced environments
  • Experience supporting complex, multi-stakeholder initiatives
  • Proven ability to work with technical teams and translate business requirements into technical product specifications
  • Experience with customer research, user interviews, and data-driven decision making

Education
  • Bachelor's degree in computer science, engineering, management information systems, or a related technical field required
  • MBA/MS or equivalent experience preferred

Annual Salary

$88,150.00 - $157,850.00

The above annual salary range is a general guideline. Multiple factors are taken into consideration to arrive at the final hourly rate/annual salary to be offered to the selected candidate.
Factors include, but are not limited to, the scope and responsibilities of the role, the selected candidate's work experience, education and training, the work location, as well as market and business considerations. At this time, GEICO will not sponsor a new applicant for employment authorization for this position.

The GEICO Pledge:

Great Company: At GEICO, we help our customers through life's twists and turns. Our mission is to protect people when they need it most, and we're constantly evolving to stay ahead of their needs. We're an iconic brand that thrives on innovation, exceeding our customers' expectations and enabling our collective success. From day one, you'll take on exciting challenges that help you grow and collaborate with dynamic teams who want to make a positive impact on people's lives.

Great Careers: We offer a career where you can learn, grow, and thrive through personalized development programs, created with your career - and your potential - in mind. You'll have access to industry-leading training, certification assistance, career mentorship and coaching with supportive leaders at all levels.

Great Culture: We foster an inclusive culture of shared success, rooted in integrity, a bias for action and a winning mindset. Grounded by our core values, we have an established culture of caring, inclusion, and belonging that values different perspectives. Our teams are dynamic and multi-faceted, led by supportive leaders, driven by performance excellence, and unified under a shared purpose. As part of our culture, we also offer employee engagement and recognition programs that reward the positive impact our work makes on the lives of our customers.

Great Rewards: We offer compensation and benefits built to enhance your physical well-being, mental and emotional health, and financial future.
  • Comprehensive Total Rewards program that offers personalized coverage tailor-made for you and your family's overall well-being
  • Financial benefits including market-competitive compensation; a 401K savings plan vested from day one that offers a 6% match; performance and recognition-based incentives; and tuition assistance
  • Access to additional benefits like mental healthcare as well as fertility and adoption assistance
  • Supports flexibility: We provide workplace flexibility as well as our GEICO Flex program, which offers the ability to work from anywhere in the US for up to four weeks per year

The equal employment opportunity policy of the GEICO Companies provides for a fair and equal employment opportunity for all associates and job applicants regardless of race, color, religious creed, national origin, ancestry, age, gender, pregnancy, sexual orientation, gender identity, marital status, familial status, disability or genetic information, in compliance with applicable federal, state and local law. GEICO hires and promotes individuals solely on the basis of their qualifications for the job to be filled. GEICO reasonably accommodates qualified individuals with disabilities to enable them to receive equal employment opportunity and/or perform the essential functions of the job, unless the accommodation would impose an undue hardship to the Company. This applies to all applicants and associates. GEICO also provides a work environment in which each associate is able to be productive and work to the best of their ability. We do not condone or tolerate an atmosphere of intimidation or harassment.
We expect and require the cooperation of all associates in maintaining an atmosphere free from discrimination and harassment with mutual respect by and for all associates and applicants.
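For readers unfamiliar with the ingestion pattern this posting references, here is a minimal, hypothetical sketch of publishing change events to a Kafka topic, the kind of flow a platform like this would move into a lakehouse. The broker address, topic name, and payload are invented for illustration; only the confluent-kafka client API itself is real.

```python
# Hypothetical ingestion sketch: publish an event to a Kafka topic with a
# per-record delivery callback as a basic observability hook.
# Broker, topic, and payload are illustrative only.
import json
from confluent_kafka import Producer  # third-party package: confluent-kafka

producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Log success/failure per record so pipeline monitoring can pick it up.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [partition {msg.partition()}]")

event = {"policy_id": "P-123", "event": "quote_created", "premium": 512.40}
producer.produce(
    "policy-events",  # illustrative topic name
    key=event["policy_id"],
    value=json.dumps(event).encode("utf-8"),
    on_delivery=on_delivery,
)
producer.flush()  # block until the record is acknowledged
```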

Posted 30+ days ago

T logo
TP-Link Systems Inc.Irvine, CA

$150,000 - $200,000 / year

ABOUT US:

Headquartered in the United States, TP-Link Systems Inc. is a global provider of reliable networking devices and smart home products, consistently ranked as the world's top provider of Wi-Fi devices. The company is committed to delivering innovative products that enhance people's lives through faster, more reliable connectivity. With a commitment to excellence, TP-Link serves customers in over 170 countries and continues to grow its global footprint. We believe technology changes the world for the better! At TP-Link Systems Inc, we are committed to crafting dependable, high-performance products to connect users worldwide with the wonders of technology. Embracing professionalism, innovation, excellence, and simplicity, we aim to assist our clients in achieving remarkable global performance and enable consumers to enjoy a seamless, effortless lifestyle.

KEY RESPONSIBILITIES
  • Design and build scalable data pipelines: Develop and maintain high-performance, large-scale data ingestion and transformation, including ETL/ELT processes, data de-identification, and security management.
  • Data orchestration and automation: Develop and manage automated data workflows using tools like Apache Airflow to schedule pipelines, manage dependencies, and ensure reliable, timely data processing and availability.
  • AWS integration and cloud expertise: Build data pipelines integrated with AWS cloud-native storage and compute services, leveraging scalable cloud infrastructure for data processing.
  • Monitoring and data quality: Implement comprehensive monitoring, logging, and alerting to ensure high availability, fault tolerance, and data quality through self-healing strategies and robust data validation processes (a minimal validation sketch follows this posting).
  • Technology innovation: Stay current with emerging big data technologies and industry trends, recommending and implementing new tools and approaches to continuously improve data infrastructure.
  • Technical leadership: Provide technical leadership for data infrastructure teams; guide architecture decisions and system design best practices. Mentor junior engineers through code reviews and knowledge sharing, lead complex projects from concept to production, and help foster a culture of operational excellence.

Bilingual Mandarin Required
Requirements

REQUIRED QUALIFICATIONS
  • Must be Mandarin and English bilingual.
  • Experience requirements: 5+ years in data engineering, software engineering, or data infrastructure, with proven experience building and operating large-scale data pipelines and distributed systems in production, including terabyte-scale big data environments.
  • Programming proficiency: Strong Python skills for building data pipelines and processing jobs, with the ability to write clean, maintainable, and efficient code. Experience with Git version control and collaborative development workflows required.
  • Distributed systems expertise: Deep knowledge of distributed systems and parallel processing concepts. Proficient in debugging and performance tuning of large-scale data systems, with an understanding of data partitioning, sharding, consistency, and fault tolerance in distributed data processing.
  • Big data frameworks: Strong proficiency in big data processing frameworks such as Apache Spark for batch processing and other relevant batch processing technologies.
  • Database and data warehouse expertise: Strong understanding of relational database concepts and data warehouse principles.
  • Workflow orchestration: Hands-on experience with data workflow orchestration tools like Apache Airflow or AWS Step Functions for scheduling, coordinating, and monitoring complex data pipelines.
  • Problem solving and collaboration: Excellent problem-solving skills with strong attention to detail and the ability to work effectively in collaborative team environments.

PREFERRED QUALIFICATIONS
  • Advanced degree: Master's degree in Computer Science or a related field, providing a strong theoretical foundation in large-scale distributed systems and data processing algorithms.
  • Modern data technology: Exposure to agentic AI patterns, knowledge base systems, and expert systems is a plus. Experience with real-time stream processing frameworks like Apache Kafka, Apache Flink, Apache Beam, or pub/sub real-time messaging systems is a plus.
  • Advanced database and data warehouse expertise: Familiarity with diverse database technologies beyond relational, such as NoSQL, NewSQL, key-value, columnar, graph, document, and time series databases. Ability to design and optimize schemas/data models for analytics use cases, with experience in modern data storage solutions like data warehouses (Redshift, BigQuery, Databricks, Snowflake).
  • Additional programming languages: Proficiency in additional languages such as Java or Scala is a plus.
  • Cloud and infrastructure expertise: Experience with AWS cloud platforms and hands-on skills in infrastructure as code (SDK, CDK, Terraform) and container orchestration (Docker/Kubernetes) for automated environment setup and scaling.

Benefits
  • Salary Range: $150,000 - $200,000
  • Free snacks and drinks, and provided lunch on Fridays
  • Fully paid medical, dental, and vision insurance (partial coverage for dependents)
  • Contributions to 401k funds
  • Bi-annual reviews and annual pay increases
  • Health and wellness benefits, including free gym membership
  • Quarterly team-building events

At TP-Link Systems Inc., we are continually searching for ambitious individuals who are passionate about their work. We believe that diversity fuels innovation, collaboration, and drives our entrepreneurial spirit. As a global company, we highly value diverse perspectives and are committed to cultivating an environment where all voices are heard, respected, and valued. We are dedicated to providing equal employment opportunities to all employees and applicants, and we prohibit discrimination and harassment of any kind based on race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Beyond compliance, we strive to create a supportive and growth-oriented workplace for everyone. If you share our passion and connection to this mission, we welcome you to apply and join us in building a vibrant and inclusive team at TP-Link Systems Inc. Please, no third-party agency inquiries, and we are unable to offer visa sponsorships at this time.
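As a small illustration of the data validation step mentioned in the responsibilities above, here is a hypothetical pandas sketch that rejects a batch before load if it has duplicate keys or too many nulls. The key column, thresholds, and sample rows are illustrative assumptions.

```python
# Hypothetical pre-load validation sketch: return human-readable failures
# for a batch (empty list = pass). Column names and thresholds are
# illustrative only.
import pandas as pd

def validate_batch(df: pd.DataFrame, key: str = "device_id",
                   max_null_frac: float = 0.05) -> list:
    failures = []
    if df.empty:
        return ["batch is empty"]
    if df[key].duplicated().any():
        failures.append(f"duplicate values in key column '{key}'")
    null_frac = df.isna().mean()  # per-column null fraction
    for col, frac in null_frac.items():
        if frac > max_null_frac:
            failures.append(f"column '{col}' is {frac:.0%} null")
    return failures

batch = pd.DataFrame({"device_id": ["a", "b", "b"],
                      "rssi_dbm": [-40, None, -55]})
print(validate_batch(batch))
# -> ["duplicate values in key column 'device_id'",
#     "column 'rssi_dbm' is 33% null"]
```

A self-healing pipeline might route a failing batch to a quarantine location and alert on-call rather than letting it reach downstream storage.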

Posted 2 weeks ago

C3 AI logo
C3 AIRedwood City, CA

$123,000 - $185,000 / year

C3 AI (NYSE: AI) is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products including the C3 Agentic AI Platform, an end-to-end platform for developing, deploying, and operating enterprise AI applications; C3 AI applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally; and C3 Generative AI, a suite of domain-specific generative AI offerings for the enterprise. Learn more at: C3 AI

As a member of the C3 AI Data Science team, you will work with some of the largest companies on the planet to help them build the next generation of AI-powered enterprise applications. You will collaborate directly with data scientists, software engineers, and subject matter experts in defining new AI solutions that provide our customers (c3.ai/customers/) with the information they need to make informed decisions and enable their digital transformation. Your role will involve finding the appropriate machine learning algorithms and implementing them on the C3 AI Platform to ensure they can run at scale. C3 AI Data Scientists are equipped with modern development tools, IDEs, and AI agents to maximize productivity and accelerate solution delivery.

Qualified candidates will have in-depth knowledge of the most common machine learning techniques and their application. You will also understand the limitations of these algorithms and how to tweak them or derive from them to achieve similar results at a large scale. Note: This is a client-facing position which requires travel. Candidates should have the ability and willingness to travel based on business needs.

Responsibilities:
  • Designing and deploying machine learning algorithms for industrial applications such as predictive maintenance, demand forecasting, and process optimization (a minimal classification sketch follows this posting).
  • Collaborating with data and subject matter experts from C3 AI and its customer teams to seek, understand, validate, interpret, and correctly use new data elements.
  • Driving adoption and scalability of Generative AI and Deep Learning systems within C3 AI's products.

Qualifications:
  • MS or PhD in Computer Science, Electrical Engineering, Statistics, or equivalent fields.
  • Applied machine learning experience (regression and classification; supervised, self-supervised, and unsupervised learning).
  • Strong mathematical background (linear algebra, calculus, probability, and statistics).
  • Proficiency in Python and object-oriented programming (e.g., JavaScript).
  • Familiarity with key Python packages for data wrangling, machine learning, and deep learning such as pandas, sklearn, tensorflow, torch, langchain, etc.
  • Ability to drive a project and work both independently and in a cross-functional team.
  • Smart, motivated, can-do attitude, and a drive to make a difference in a fast-paced environment.
  • Excellent verbal and written communication.
  • Ability to travel as needed.

Preferred Qualifications:
  • Experience with scalable ML (MapReduce, Spark).
  • Experience in Generative AI, e.g., Large Language Models (LLMs), embedding models, prompt engineering, and fine-tuning.
  • Experience with reinforcement learning.
  • A portfolio of projects (GitHub, papers, etc.).
  • Experience working with modern IDEs and AI agent tools as part of accelerated development workflows.

C3 AI provides excellent benefits, a competitive compensation package, and a generous equity plan.

California Base Pay Range: $123,000 - $185,000 USD

C3 AI is proud to be an Equal Opportunity and Affirmative Action Employer.
We do not discriminate on the basis of any legally protected characteristics, including disabled and veteran status. 
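To make the predictive-maintenance responsibility above concrete, here is a minimal supervised-classification sketch in scikit-learn. The synthetic features, labels, and model choice are illustrative assumptions, not C3 AI's methodology or platform code.

```python
# Hypothetical predictive-maintenance sketch: train a classifier on toy
# sensor features and score held-out units by failure probability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
# Features: e.g., vibration level and operating temperature per machine.
X = rng.normal(size=(1000, 2))
# Synthetic label: failure is more likely when both readings run high.
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]  # failure probability per unit
print(f"ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```

In production, the same workflow would be implemented against real telemetry and deployed on a platform that can retrain and score at scale.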

Posted 30+ days ago

C logo
CrackaJack Digital Solutions LLCPhoenix, AZ
In-person round of interview mandatory.

Tech Stack: Big Data, Spark, Python, SQL; GCP is a must.

We need a hardcore, heavy-hitting Data Engineer who is extremely skilled and able to function independently and manage their deliverables:
  • Capable of writing ETL pipelines using Python from scratch (a minimal Airflow sketch follows this posting)
  • Expert in OOP principles and concepts
  • Ability to independently write efficient and reusable code for ETL pipelines
  • Expert in data modeling concepts such as schemas and entity relationships
  • Expert at analyzing and developing queries in SQL in various dialects (SQL Server, DB2, Oracle)
  • Familiarity with Airflow and an understanding of how to develop DAGs
  • Expert in data warehouses like BigQuery and Databricks Delta Lakehouse, and how to programmatically ingest, cleanse, govern, and report data out of them
  • Expertise in Spark

Powered by JazzHR
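For candidates wondering what "developing DAGs" looks like in practice, here is a minimal, hypothetical Airflow sketch of a daily extract-then-transform chain. The DAG id, schedule, and task bodies are illustrative (the `schedule` argument assumes a recent Airflow 2.x release; older versions use `schedule_interval`).

```python
# Hypothetical Airflow DAG sketch: two Python tasks chained so that
# transform runs only after extract succeeds. Names are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pulling raw rows from the source system")

def transform(**context):
    print("cleaning and loading rows into the warehouse")

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # dependency: extract before transform
```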

Posted 30+ days ago

Care It Services logo
Care It ServicesDallas, Texas

$50 - $60 / hour

Benefits:
  • Company parties
  • Competitive salary
  • Dental insurance
  • Free food & snacks

Hello, we hope you are doing well.

Work location: Dallas, TX / Atlanta, GA
End client: IBM/DTV
Completely on-site position

Job Description

We are looking for an experienced Databricks Subject Matter Expert (SME) with expertise in Data Profiling and Data Modeling to join our growing team. In this role, you will be responsible for leveraging Databricks to drive end-to-end data solutions, ensuring data quality, and optimizing data pipelines for performance and scalability. You will also play a pivotal role in designing, implementing, and maintaining data models that align with business requirements and industry best practices. The ideal candidate should have deep experience with the Databricks ecosystem, including Spark, Delta Lake, and other cloud-based data technologies, combined with a strong understanding of data profiling and data modeling concepts. You will collaborate closely with data engineers, data scientists, and business analysts to ensure data integrity, accuracy, and optimal architecture. (A minimal profiling sketch follows this posting.)

Skills and Qualifications:
  • Technical skills: Databricks (including Spark, Delta Lake, and other relevant components)
  • 8+ years of hands-on experience with Databricks or related technologies
  • Strong expertise in data profiling tools and techniques
  • Experience in data profiling and data quality management
  • Experience in data modeling, including working with dimensional models for analytics (e.g., relational, dimensional, star schema, snowflake schema)
  • Advanced knowledge of SQL, PySpark, and other scripting languages used within Databricks
  • Hands-on experience with ETL/ELT processes, data integration, and data pipeline optimization
  • Familiarity with cloud platforms (AWS, Azure, Google Cloud) and cloud data storage technologies
  • Proficiency in Python, Scala, or other programming languages commonly used in Databricks
  • Experience in data governance and data quality practices
  • Familiarity with machine learning workflows within Databricks is a plus

Thank you. Contact: venkatesh@careits.com

Compensation: $50.00 - $60.00 per hour

Who We Are

CARE ITS is a certified woman-owned and operated minority company (certified as WMBE). At CARE ITS, we are world-class IT professionals helping clients achieve their goals. Care ITS was established in 2010. Since then, we have successfully executed several projects with our expert team of professionals, each with more than 20 years of experience. We operate globally, with our headquarters in Plainsboro, NJ, and focused specialization in Salesforce, Guidewire, and AWS. We provide expert solutions to our customers in various business domains.
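As a rough illustration of the data-profiling work this posting centers on, here is a minimal PySpark sketch that computes per-column null counts and approximate distinct counts. The DataFrame, column names, and sample rows are illustrative assumptions, not the client's schema.

```python
# Hypothetical data-profiling sketch in PySpark: per-column null counts
# and approximate distinct counts over a Databricks-style DataFrame.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("profiling_sketch").getOrCreate()
df = spark.createDataFrame(
    [(1, "retail", None), (2, "retail", 19.99), (3, None, 4.50)],
    ["order_id", "segment", "amount"],
)

# Null count per column: count rows where the column is null.
null_counts = df.select([
    F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}_nulls")
    for c in df.columns
])
# Approximate distinct count per column (cheap at large scale).
distincts = df.select([
    F.approx_count_distinct(c).alias(f"{c}_distinct") for c in df.columns
])
null_counts.show()
distincts.show()
```

Profiles like these typically feed data-quality rules: a column whose null rate or cardinality drifts from its historical profile gets flagged for investigation.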

Posted 3 weeks ago

Q logo
QCHI/ LendNation Open CareerLenexa, Kansas
QCHI / LendNation is looking for an experienced Data Analyst / Data Scientist to join our Marketing team. This is a role for someone who can independently drive high-impact analysis and shape strategic decisions; it is not entry-level. The ideal candidate brings proven experience in online lending analytics, fraud detection, and credit risk performance management, preferably in subprime or alternative financial services.

This role owns analytic projects end-to-end: defining objectives, extracting and validating data, performing analysis, presenting findings, and ensuring recommendations are adopted. The individual will partner with underwriting, marketing, product, and external vendors to quantify risk, identify growth opportunities, and improve portfolio performance. Success in this role means translating complex analysis into clear, persuasive recommendations that directly influence strategy, decisioning, and financial outcomes.

The right candidate will be:
  • Experienced in credit and fraud analytics, not just reporting
  • Able to challenge assumptions and propose analytical solutions
  • Self-directed, accountable, and comfortable prioritizing multiple workstreams

REQUIRED SKILLS and EXPERIENCE:
  • 5+ years of analytics experience in online lending, fintech, consumer credit, fraud, or a related industry
  • 5+ years of hands-on experience using R or Python for data extraction, modeling, automation, and analysis
  • Demonstrated expertise with:
    • Credit risk analysis, segmentation, and underwriting decision support
    • Fraud detection analytics (velocity checks, behavioral/identity risk indicators; a minimal velocity-check sketch follows this posting)
    • Portfolio performance measurement and optimization
    • Online marketing analytics and customer lifecycle performance
  • Strong SQL skills, including SAS
  • Ability to communicate insights visually and verbally to both technical and non-technical audiences (Power BI, Tableau, etc.)
  • Proven ability to drive outcomes and influence leadership through data-based recommendations
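For readers unfamiliar with the velocity checks mentioned above, here is a minimal, hypothetical pandas sketch: count applications per identity in a trailing 24-hour window and flag bursts. The column names, timestamps, and threshold are illustrative assumptions.

```python
# Hypothetical fraud velocity-check sketch: rolling 24-hour application
# counts per (hashed) identity. Threshold and columns are illustrative.
import pandas as pd

apps = pd.DataFrame({
    "ssn_hash": ["a", "a", "a", "b"],
    "applied_at": pd.to_datetime([
        "2024-03-01 09:00", "2024-03-01 10:30",
        "2024-03-01 11:00", "2024-03-01 09:15",
    ]),
})

# For each application, count applications from the same identity within
# the preceding 24 hours (inclusive of the current one).
apps = apps.sort_values("applied_at")
apps["one"] = 1
velocity = (
    apps.set_index("applied_at")
        .groupby("ssn_hash")["one"]
        .rolling("24h").sum()
        .rename("apps_24h")
        .reset_index()
)
flagged = velocity[velocity["apps_24h"] >= 3]  # illustrative threshold
print(flagged)  # -> the third application from identity 'a'
```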

Posted 3 weeks ago
