Data Science Jobs 2026 (Now Hiring) – Smart Auto Apply

We've scanned millions of jobs. Simply select your favorites, and we can fill out the applications for you.

Sr. Data Scientist

DPR Construction - Atlanta, GA
Job Description DPR Construction is seeking a skilled Senior Data Scientist to help advance our data-driven approach to building. In this role, you'll use statistical analysis, machine learning, and data visualization to turn complex construction and business data into actionable insights that improve project planning, cost forecasting, resource management, and safety. Working with project and operations teams, you'll build and deploy scalable, secure data solutions on cloud platforms like Azure and AWS, driving innovation and operational excellence across DPR's projects. Responsibilities Data analysis and modeling: Analyze large datasets to identify trends, bottlenecks, and areas for improvement in operational performance. Build predictive and statistical models to forecast demand, capacity, and potential issues. Develop and deploy models: Build, test, and deploy machine learning and AI models to improve operational processes. Analyze operational data: Examine data related to projects, production, supply chains, inventory, and quality control to identify patterns, trends, and inefficiencies. Optimize processes: Use data-driven insights to streamline workflows, allocate resources more effectively, and improve overall performance. Forecast and predict: Create predictive models to forecast outcomes, such as demand, and inform strategic decisions. Communicate findings: Present findings and recommendations to stakeholders through reports, visualizations, and presentations. Ensure reliability: Build and maintain reliable, scalable, and efficient data science systems and processes. Collaboration: Partner with project managers, engineers, and business leaders to ensure data solutions are aligned with organizational goals and deliver tangible improvements. Continuous Learning: Stay current with advancements in data science and machine learning to continually enhance the company's data capabilities. Reporting and communication: Create dashboards and reports that clearly communicate performance trends and key insights to leadership and other stakeholders. Translate complex data into actionable recommendations. Performance monitoring: Implement data quality checks and monitor the performance of models and automated systems, creating feedback loops for continuous improvement. Experimentation: Design and evaluate experiments to quantify the impact of new systems and changes on operational outcomes. Qualifications Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Engineering, or a related field. 7+ years of experience in data science roles within AEC, product or technology organizations. At least 4 years of experience working with cloud platforms, specifically Azure and AWS, for model deployment and data management. Strong proficiency in Python or R for data analysis, modeling, and machine learning, with experience in relevant libraries (e.g., Scikit-learn, TensorFlow, PyTorch) and NLP frameworks (e.g., GPT, Hugging Face Transformers). Expertise in SQL for data querying and manipulation, and experience with data visualization tools (e.g., Power BI, Tableau). Solid understanding of statistical methods, predictive modeling, and optimization techniques. Expertise in statistics and causal inference, applied in both experimentation and observational causal inference studies. Proven experience designing and interpreting experiments and making statistically sound recommendations. 
Strategic and impact-driven mindset, capable of translating complex business problems into actionable frameworks. Ability to build relationships with diverse stakeholders and cultivate strong partnerships. Strong communication skills, including the ability to bridge technical and non-technical stakeholders and collaborate across various functions to ensure business impact. Ability to operate effectively in a fast-moving, ambiguous environment with limited structure. Experience working with construction-related data or similar industries (e.g., engineering, manufacturing) is a plus. Preferred Skills Familiarity with construction management software (e.g., ACC, Procore, BIM tools) and knowledge of project management methodologies. Hands-on experience with Generative AI tools and libraries. Background in experimentation infrastructure or human-AI interaction systems. Knowledge of time-series analysis, anomaly detection, and risk modeling specific to construction environments. DPR Construction is a forward-thinking, self-performing general contractor specializing in technically complex and sustainable projects for the advanced technology, life sciences, healthcare, higher education and commercial markets. Founded in 1990, DPR is a great story of entrepreneurial success as a private, employee-owned company that has grown into a multi-billion-dollar family of companies with offices around the world. Working at DPR, you'll have the chance to try new things, explore unique paths and shape your future. Here, we build opportunity together-by harnessing our talents, enabling curiosity and pursuing our collective ambition to make the best ideas happen. We are proud to be recognized as a great place to work by our talented teammates and leading news organizations like U.S. News and World Report, Forbes, Fast Company and Newsweek. Explore our open opportunities at www.dpr.com/careers.
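
For context on the kind of predictive modeling this posting describes, the sketch below fits a simple cost-overrun regressor with scikit-learn; the project features, column names, and data are invented for illustration and are not DPR's.

```python
# Illustrative sketch: forecasting project cost overrun from a few hypothetical
# project features. All column names and data are invented for demonstration only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
projects = pd.DataFrame({
    "square_feet": rng.uniform(10_000, 500_000, n),
    "crew_size": rng.integers(10, 200, n),
    "duration_months": rng.uniform(6, 36, n),
    "change_orders": rng.poisson(5, n),
})
# Synthetic target: cost overrun (%) loosely driven by the features plus noise.
projects["cost_overrun_pct"] = (
    0.00001 * projects["square_feet"]
    + 0.5 * projects["change_orders"]
    + 0.1 * projects["duration_months"]
    + rng.normal(0, 2, n)
)

X = projects.drop(columns="cost_overrun_pct")
y = projects["cost_overrun_pct"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
preds = model.predict(X_test)
print(f"MAE on held-out projects: {mean_absolute_error(y_test, preds):.2f} pct points")
```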

Posted 30+ days ago

Data Engineering Lead - Finance

DPR Construction - Charlotte, NC
Job Description We are a leading construction company committed to delivering high-quality, innovative projects. Our team integrates cutting-edge technologies into the construction process to streamline operations, enhance decision-making, and drive efficiency across all levels. We are looking for a talented Data Engineer to join our team and contribute to developing robust data solutions that support our business goals. This role is ideal for someone who enjoys combining technical problem-solving with stakeholder collaboration. You will collaborate with business leaders to understand data needs and work closely with a global engineering team to deliver scalable, timely, and high-quality data solutions that power insights and operations. Responsibilities Own data delivery for specific business verticals by translating stakeholder needs into scalable, reliable, and well-documented data solutions. Participate in requirements gathering, technical design reviews, and planning discussions with business and technical teams. Partner with the extended data team to define, develop, and maintain shared data models and definitions. Design, develop, and maintain robust data pipelines and ETL processes using tools like Azure Data Factory and Python across internal and external systems. Proactively manage data quality, error handling, monitoring, and alerting to ensure timely and trustworthy data delivery. Perform debugging, application issue resolution, root cause analysis, and assist in proactive/preventive maintenance. Support incident resolution and perform root cause analysis for data-related issues. Create and maintain both business requirement and technical requirement documentation Collaborate with data analysts, business users, and developers to ensure the accuracy and efficiency of data solutions. Collaborate with platform and architecture teams to align with best practices and extend shared data engineering patterns. Qualifications Minimum of 4 years of experience as a Data Engineer, working with cloud platforms (Azure, AWS). Proven track record of managing stakeholder expectations and delivering data solutions aligned with business priorities. Strong hands-on expertise in Azure Data Factory, Azure Data Lake, Python, and SQL Familiarity with cloud storage (Azure, AWS S3) and integration techniques (APIs, webhooks, REST). Experience with modern data platforms like Snowflake and Microsoft Fabric. Solid understanding of Data Modeling, pipeline orchestration and performance optimization Strong problem-solving skills and ability to troubleshoot complex data issues. Excellent communication skills, with the ability to work collaboratively in a team environment. Familiarity with tools like Power BI for data visualization is a plus. Experience working with or coordinating with overseas teams is a strong plus Preferred Skills Knowledge of Airflow or other orchestration tools. Experience working with Git-based workflows and CI/CD pipelines Experience in the construction industry or a similar field is a plus but not required. DPR Construction is a forward-thinking, self-performing general contractor specializing in technically complex and sustainable projects for the advanced technology, life sciences, healthcare, higher education and commercial markets. Founded in 1990, DPR is a great story of entrepreneurial success as a private, employee-owned company that has grown into a multi-billion-dollar family of companies with offices around the world. 
Working at DPR, you'll have the chance to try new things, explore unique paths and shape your future. Here, we build opportunity together-by harnessing our talents, enabling curiosity and pursuing our collective ambition to make the best ideas happen. We are proud to be recognized as a great place to work by our talented teammates and leading news organizations like U.S. News and World Report, Forbes, Fast Company and Newsweek. Explore our open opportunities at www.dpr.com/careers.
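
As a rough illustration of the pipeline and data-quality work described above (not DPR's actual Azure Data Factory setup), here is a minimal Python extract-validate-load step with invented table and column names.

```python
# Minimal sketch of a pipeline step with explicit data-quality gates, the kind of
# check the posting describes. Source, columns, and thresholds are hypothetical.
import pandas as pd

def extract() -> pd.DataFrame:
    # Stand-in for a real source (e.g., an API pull or a lake file read by the pipeline).
    return pd.DataFrame({
        "invoice_id": [1, 2, 2, 4],
        "project_code": ["A-100", "A-100", "B-200", None],
        "amount_usd": [1200.0, 340.5, 340.5, 99.0],
    })

def validate(df: pd.DataFrame) -> pd.DataFrame:
    issues = []
    if df["invoice_id"].duplicated().any():
        issues.append("duplicate invoice_id values")
    if df["project_code"].isna().any():
        issues.append("missing project_code values")
    if (df["amount_usd"] <= 0).any():
        issues.append("non-positive amounts")
    if issues:
        # In a real pipeline this would raise an alert rather than print.
        print("data-quality issues:", "; ".join(issues))
    return df.drop_duplicates("invoice_id").dropna(subset=["project_code"])

def load(df: pd.DataFrame) -> None:
    # Stand-in for a warehouse write (e.g., df.to_sql or a bulk copy step).
    print(f"loading {len(df)} clean rows")

load(validate(extract()))
```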

Posted 30+ days ago

Sr Data Analyst

PatientPoint, Inc. - Cincinnati, OH
Join PatientPoint to be part of a dynamic team creating change in and around the doctor's office. As a leading digital health company, we innovate to positively impact patient behaviors. Our purpose-driven approach offers an inspirational career opportunity where you can contribute to improving health outcomes for millions of patients nationwide. Location: Cincinnati Schedule: Hybrid Schedule We are seeking a highly skilled Senior Data Analyst to join our team and drive data-driven decision-making across the organization. The ideal candidate will have a strong analytical mindset, expertise in data visualization, and the ability to transform complex data sets into meaningful insights. As a Senior Data Analyst, you will collaborate with cross-functional teams to optimize business strategies, improve processes, and enhance overall performance through data analysis. What You'll Do Analyze large and complex datasets to identify trends, patterns, and insights that drive business decisions. Communicate findings effectively to stakeholders primarily using Looker to ensure information is automated and not created manually. Work closely with business leaders to understand objectives and provide actionable recommendations based on data analysis. Evolve from creating descriptive reports that monitor metrics and outcomes to utilizing data proactively to uncover opportunities for improvement across the organization. Design and implement data models, ensuring data integrity, accuracy, and consistency across multiple sources. Utilize advanced statistical techniques and leverage machine learning models to solve business challenges. Collaborate with IT, engineering, product, and business stakeholders to optimize data collection and pipeline processes. Mentor and guide junior analysts, fostering a data-driven culture within the organization. Stay updated with industry trends, emerging technologies, and best practices in data analytics and business intelligence. Quickly familiarize yourself with new datasets and efficiently provide valuable insights. What We Need Bachelor's degree in Data Science, Statistics, Mathematics, Computer Science, Business Analytics, or a related intense quantitative field 5+ years of experience in analytics, data management, business intelligence or a related field Proficiency in SQL for querying and managing large datasets. Experience with data visualization tools such as Looker, Tableau, or Power BI. Strong understanding of data modeling, statistical analysis and data visualization Proficiency in programming languages such as Python (preferred) or R for data analysis and automation. Experience with cloud-based data platforms (e.g. Snowflake, GCP, AWS, Azure) Excellent communication and presentation skills, with the ability to convey complex data concepts to non-technical stakeholders. Desired Qualifications Master's degree in Data Science, Statistics, Mathematics, Computer Science, Business Analytics, or a related intense quantitative field What You'll Need to Succeed Strong ability to translate analytical concepts and robust data storytelling skills to communicate insights to various audiences, including non-technical business partners Strong problem-solving skills and a results-driven approach. Excellent time management skills and keen attention to detail. #LI-ED1 #LI-Hybrid About PatientPoint: PatientPoint is the Point of Change company, transforming the healthcare experience through the strategic delivery of behavior-changing content at critical moments of care. 
As the nation's largest and most impactful digital network in 30,000 physician offices, we connect patients, providers and health brands with relevant information that is proven to drive healthier decisions and better outcomes. Learn more at patientpoint.com. Latest News & Innovations: Named A Best Place to Work! Read More Mike Walsh, COO answers "What Makes a Great Leader". Read More Recognized on Vault's Top Internship List. Read More What We Offer: We know you bring your whole self to work every day, and we are committed to supporting our full-time teammates with a comprehensive range of modernized benefits and cultural perks. We offer competitive compensation, flexible time off to recharge, hybrid work options, mental and emotional wellness resources, a 401K plan, and more. While these benefits are available to full-time team members, we strive to create a positive and supportive environment for all teammates. PatientPoint recognizes that privacy is important to you. Please read the PatientPoint privacy policy, we want you to be familiar with how we may collect, use, and disclose your information. Employer is EOE/M/F/D/V
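
To illustrate the SQL-centric analysis this role emphasizes, the toy query below aggregates made-up engagement data in an in-memory SQLite table; the schema is hypothetical and is not PatientPoint's.

```python
# Toy SQL aggregation over an in-memory SQLite table with invented engagement data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE screen_events (office_id INTEGER, month TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO screen_events VALUES (?, ?, ?)",
    [(1, "2025-01", 120), (1, "2025-02", 150), (2, "2025-01", 80), (2, "2025-02", 60)],
)

query = """
SELECT office_id,
       SUM(views)           AS total_views,
       ROUND(AVG(views), 1) AS avg_monthly_views
FROM screen_events
GROUP BY office_id
ORDER BY total_views DESC
"""
for row in conn.execute(query):
    print(row)  # (office_id, total_views, avg_monthly_views)
```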

Posted 30+ days ago

Machine Learning Research Engineer, Agent Data Foundation - Enterprise GenAI

Scale AI, Inc. - San Francisco, CA

$252,000 - $315,000 / year

AI is becoming vitally important in every function of our society. At Scale, our mission is to accelerate the development of AI applications. For 9 years, Scale has been the leading AI data foundry, helping fuel the most exciting advancements in AI, including generative AI, defense applications, and autonomous vehicles. With our recent investment from Meta, we are doubling down on building out state of the art post-training algorithms to reach the performance necessary for complex agents in enterprises around the world. The Enterprise ML Research Lab works on the front lines of this AI revolution. We are working on an arsenal of proprietary research, tools, and resources that serve all of our enterprise clients. As MLRE on the Data Foundation team, you'll work on cutting edge research to define the data flywheel that makes the whole machine move. This includes research around synthetic environments from task definitions, building agents for trace analysis, and contributing to a cutting edge framework that automatically hill-climbs agent-building from an eval set. This will involve creating best-in-class Agents that achieve state of the art results through a combination of post-training + agent-building algorithms. If you are excited about shaping the future of the modern GenAI movement, we would love to hear from you! You will: Build synthetic data pipelines to generate enterprise environments to use for RL post-training Create agents to convert traces from production into actionable insights to use to improve agents Contribute to our agent building product which can construct other agents using coding agents + proprietary algorithms Train state of the art models, developed both internally and from the community, to deploy to our enterprise customers. Ideally you'd have: 3+ years of building with LLMs in a production environment Clear experiences with constructing high quality data to use to improve an LLM/Agent Publications in top conferences such as NEURIPS, ICLR, or ICML within the last two years PhD or Masters in Computer Science or a related field Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You'll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend. Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $252,000-$315,000 USD PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants. 
About Us: At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications. We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status. We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision. PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants' needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.
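
As a loose illustration of what a synthetic-task pipeline for agent evaluation can look like (the schema, tool names, and fields here are invented and are not Scale's), consider this minimal sketch.

```python
# Purely illustrative: generate synthetic "enterprise task" definitions that a
# downstream agent evaluator could run against. Schema and values are made up.
import json
import random

random.seed(7)
TOOLS = ["search_tickets", "update_crm", "send_email", "query_warehouse"]
GOALS = ["resolve a billing dispute", "summarize an account's open issues",
         "draft a renewal offer", "reconcile two conflicting records"]

def make_task(task_id: int) -> dict:
    return {
        "task_id": task_id,
        "goal": random.choice(GOALS),
        "available_tools": random.sample(TOOLS, k=2),
        "max_steps": random.randint(3, 8),
        # A rubric field like this is what an automated grader might score against.
        "success_criteria": "final answer cites at least one retrieved record",
    }

tasks = [make_task(i) for i in range(3)]
print(json.dumps(tasks, indent=2))
```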

Posted 30+ days ago

AI & GenAI Data Scientist - Senior Associate

PwC - Irvine, CA

$77,000 - $202,000 / year

Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow. Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. 
Minimum Degree Required Bachelor's Degree Minimum Year(s) of Experience 4 year(s) Demonstrates thorough-level abilities and/or a proven record of success managing the identification and addressing of client needs: Building of GenAI and AI solutions, including but not limited to analytical model development and implementation, prompt engineering, general all-purpose programming (e.g., Python), testing, communication of results, front end and back-end integration, and iterative development with clients Documenting and analyzing business processes for AI and Generative AI opportunities, including gathering of requirements, creation of initial hypotheses, and development of GenAI and AI solution approach Collaborating with client team to understand their business problem and select the appropriate analytical models and approaches for AI and GenAI use cases Designing and solutioning AI/GenAI architectures for clients, specifically for plugin-based solutions (i.e., ChatClient application with plugins) and custom AI/GenAI application builds Processing unstructured and structured data to be consumed as context for LLMs, including but not limited to embedding of large text corpus, generative development of SQL queries, building connectors to structured databases Support management of daily operations of a global data and analytics team on client engagements, review developed models, provide feedback and assist in analysis; Directing data engineers and other data scientists to deliver efficient solutions to meet client requirements; Leading and contributing to development of proof of concepts, pilots, and production use cases for clients while working in cross-functional teams; Structuring, write, communicate and facilitate client presentations; and, Directing associates through coaching, providing feedback, and guiding work performance. Demonstrates thorough abilities and/or a proven record of success learning and performing in functional and technical capacities, including the following areas: Managing AI/GenAI application development teams including back-end and front-end integrations Using Python (e.g., Pandas, NLTK, Scikit-learn, Keras etc.), common LLM development frameworks (e.g., Langchain, Semantic Kernel), Relational storage (SQL), Non-relational storage (NoSQL); Experience in analytical techniques such as Machine Learning, Deep Learning and Optimization Vectorization and embedding, prompt engineering, RAG (retrieval, augmented, generation) workflow dev Understanding or hands on experience with Azure, AWS, and / or Google Cloud platforms Experience with Git Version Control, Unit/Integration/End-to-End Testing, CI/CD, release management, etc. Travel Requirements Up to 80% Job Posting End Date Learn more about how we work: https://pwc.to/how-we-work PwC does not intend to hire experienced or entry level job seekers who will need, now or in the future, PwC sponsorship through the H-1B lottery, except as set forth within the following policy: https://pwc.to/H-1B-Lottery-Policy . As PwC is an equal opportunity employer, all qualified applicants will receive consideration for employment at PwC without regard to race; color; religion; national origin; sex (including pregnancy, sexual orientation, and gender identity); age; disability; genetic information (including family medical history); veteran, marital, or citizenship status; or, any other status protected by law. 
For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles' Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for Employment in accordance with these laws. At PwC, we recognize that conviction records may have a direct, adverse, and negative relationship to responsibilities such as accessing sensitive company or customer information, handling proprietary assets, or collaborating closely with team members. We evaluate these factors thoughtfully to establish a secure and trusted workplace for all. Applications will be accepted until the position is filled or the posting is removed, unless otherwise set forth on the following webpage. Please visit this link for information about anticipated application deadlines: https://pwc.to/us-application-deadlines The salary range for this position is: $77,000 - $202,000. Actual compensation within the range will be dependent upon the individual's skills, experience, qualifications and location, and applicable employment laws. All hired individuals are eligible for an annual discretionary bonus. PwC offers a wide range of benefits, including medical, dental, vision, 401k, holiday pay, vacation, personal and family sick leave, and more. To view our benefits at a glance, please visit the following link: https://pwc.to/benefits-at-a-glance
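
For readers unfamiliar with the RAG workflow mentioned in the qualifications, here is a minimal retrieve-then-prompt sketch; TF-IDF stands in for a production embedding model, the documents are invented, and no LLM is actually called.

```python
# Minimal retrieval-then-prompt sketch of a RAG-style workflow (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Invoices over 90 days past due are escalated to the collections team.",
    "Travel expenses require manager approval above 500 USD.",
    "Quarterly revenue is recognized when the service period ends.",
]
question = "When does an overdue invoice get escalated?"

vectorizer = TfidfVectorizer().fit(documents + [question])
doc_vectors = vectorizer.transform(documents)
question_vector = vectorizer.transform([question])

# Pick the most similar document as context for the generation step.
scores = cosine_similarity(question_vector, doc_vectors)[0]
best_doc = documents[scores.argmax()]

prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)  # in a full pipeline, this prompt would be sent to an LLM
```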

Posted 3 weeks ago

Frontend Engineer, Growth and Data

Hebbia - New York City; San Francisco, CA

$160,000 - $300,000 / year

About Hebbia The AI platform for investors and bankers that generates alpha and drives upside. Founded in 2020 by George Sivulka and backed by Peter Thiel and Andreessen Horowitz, Hebbia powers investment decisions for BlackRock, KKR, Carlyle, Centerview, and 40% of the world's largest asset managers. Our flagship product, Matrix, delivers industry-leading accuracy, speed, and transparency in AI-driven analysis. It is trusted to help manage over $15 trillion in assets globally. We deliver the intelligence that gives finance professionals a definitive edge. Our AI uncovers signals no human could see, surfaces hidden opportunities, and accelerates decisions with unmatched speed and conviction. We do not just streamline workflows. We transform how capital is deployed, how risk is managed, and how value is created across markets. Hebbia is not a tool. Hebbia is the competitive advantage that drives performance, alpha, and market leadership. The Team The Growth and Data team at Hebbia is the engine that fuels the entire AI platform with the data it needs to reason and retrieve. We're responsible for sourcing, indexing, and enriching content from every corner of the knowledge universe - spanning private enterprise data, public data, and third-party platforms - and delivering it seamlessly to power user workflows and agentic research. We build robust integrations with platforms like Snowflake, S3, SharePoint, Dropbox, and beyond, enabling organizations to securely unify their data ecosystems. From discovery and search to retrieval-augmented deep research within chat matrix frameworks, the data we deliver quite literally powers every part of the AI platform. The Role As a Frontend Software Engineer on Hebbia's Growth and Data team, you will design, develop, and deliver engaging user experiences that unlock unique, high-value opportunities for our customers and drive Hebbia's growth. You will build innovative frontend interfaces, creating personalized views into unique datasets, and powerful tools enabling users to fully harness the potential of Hebbia's platform. By collaborating closely with cross-functional teams-including product, design, and customer success-you'll own critical product experiences from ideation to launch. Your work will directly influence customer adoption, retention, and satisfaction, as you craft exceptional features that clearly showcase Hebbia's value. Your creativity, technical expertise, and user-centric approach will help fuel Hebbia's expansion and shape the next generation of our platform. Responsibilities Be an owner. We are a small and growing team relying on each and every engineer to provide significant coverage. Build without prior art. Agentic interfaces do not have templates or pre-existing guidelines to rinse and repeat - you will be setting the standard for an industry. Own critical system components. Take complex requirements and turn them into robust, scaled solutions that solve real customer needs. Unlock meaningful customer value. Our customers aren't using Hebbia for the gimmick - they unlock market moving value. You will be responsible for unlocking this Alpha. Who You Are Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field. A strong academic background with coursework in data structures, algorithms, and software development is preferred. 5+ years software development experience at a venture-backed startup or top technology firm. 
Proficiency in React, Typescript, and popular frontend frameworks for component design and state management. Deep expertise in one or multiple: full-stack development, frontend engineering, web development, UX/UI engineering. Knowledge of cloud platforms and application deployment platforms (e.g., AWS, Vercel) Demonstrated awareness of best practices in UI architecture and development Problem-solving and analytical skills: Ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders. Leadership and teamwork: Proven experience in leading software development projects and collaborating with cross-functional teams. Strong interpersonal and communication skills to foster a collaborative and inclusive work environment. Continuous learning mindset: Enthusiasm for continuous learning and professional growth. A passion for exploring new technologies, frameworks, and software development methodologies. Embraces rapid prototyping with an emphasis on user feedback Autonomous and excited about taking ownership over major initiatives. Compensation The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate's experience and qualifications. Adjustments outside of this range may be considered for candidates whose qualifications significantly differ from those outlined in the job description. Life @ Hebbia PTO: Unlimited Insurance: Medical + Dental + Vision+ 401K Eats: Catered lunch daily + doordash dinner credit if you ever need to stay late Parental leave policy: 3 months non-birthing parent, 4 months for birthing parent Fertility benefits: $15k lifetime benefit New hire equity grant: competitive equity package with unmatched upside potential #LI-Onsite

Posted 30+ days ago

Data Warehouse Developer 1

Louisiana State University - Baton Rouge, LA
All Job Postings will close at 12:01a.m. CST (1:01a.m. EST) on the specified Closing Date (if designated). If you close the browser or exit your application prior to submitting, the application progress will be saved as a draft. You will be able to access and complete the application through "My Draft Applications" located on your Candidate Home page. Job Posting Title: Data Warehouse Developer 1 Position Type: Professional / Unclassified Department: LSUAM FA - ITS - DA - PS - Business Intelligence (Anthony Joseph Alfonso (00012319)) Work Location: 0202 Fred C. Frey Computing Services Building Pay Grade: Professional Job Description: Data warehouse design, configuration, and support for mission-critical enterprise data warehouse and reporting. Familiar with and responsible for documenting concepts, practices, and procedures. Design, implement, test, and debug complex queries to extract, calculate, transform, and load data. Design, implement, and test data schemas. Relies on experience and judgment to plan and accomplish goals. Performs a variety of tasks. A certain degree of creativity and latitude is expected. All Information Technology Services employees are expected to demonstrate a commitment to exemplary customer service in all facets of their work. Job Responsibilities: DATA WAREHOUSE DEVELOPMENT: Work with IT personnel, central administrative staff, and end users to conduct data analysis and validation. Design, implement, test, debug complex queries to extract, calculate, transform, and load data to data warehouse. Follow programming design principles and best practices. Comply with software quality assurance standards, business goals, and project timelines in an agile delivery environment. Monitor release/upgrade notifications and validate related warehouse functionality. Document and share changes with colleagues. (35%) DATA AND PROCESS DESIGN: Work with IT data modelers, central administrative staff, and end users to design data warehouse data models, data sets, and data definitions for enterprise business intelligence (EBI) reporting and for use with data visualization technology solutions.(30%) DATA WAREHOUSE ADMINISTRATION / SYSTEM TECHNICAL SUPPORT: Manage daily tickets and cyclical (release/upgrades) administration of data warehouse solutions in collaboration with other IT personnel, campus partners, and the campus community at large. Ensure the overall health of the technology platform. Reduce operational complexity. Design and implement system security. Perform capacity planning and management. Serve as escalation point for production and/or platform issues. Coordinate and communicate system-related events. . (25%) Other duties as assigned. (10%) Minimum Qualifications: Bachelor's degree with 1 year of professional work experience. + LSU values skills, experience, and expertise. Candidates who have relevant experience in key job responsibilities are encouraged to apply- a degree is not required as long as the candidate meets the required years of experience specified in the job description. Preferred Qualifications: Master's degree in Business, Mathematics, Statistics, Engineering, Science, Management Information Systems, Computer Science, or related field. Experience in the following: Data warehouse administration. ETL for business reporting needs. Data warehouse design, configuration, and support. Understanding of Relational databases. Understanding of SQL language. Examples of Work: Assist with data warehouse administration tasks. 
Translates business questions and requirements into ETL (extraction, transformation, and load) and/or data warehouse processes, defining and capturing metadata and rules associated with ETL and/or data warehouse processes. Researches and understands source and target data structures, ETL and/or data warehouse processes, and products. Maps source system data to target system data model. Develops ETL and/or data warehouse processes using organizational ETL and/or data warehouse tools. Complies with BI standard operating procedures and institutional data management policies. Demonstrates effective knowledge of ETL and/or data warehouse tool architectures, functions, features, and programming languages. Provides accurate results that satisfy functional requirements. Adheres to organization's design policies including maintaining current documentation. Provides troubleshooting for ETL and/or data warehouse artifacts. Collaborates with subject matter experts (SMEs), functional owners, and other ITS staff. Additional Job Description: Special Instructions: A copy of your transcript(s) may be attached to your application (if available). However, original transcripts are required prior to hire. Please provide three professional references including name, title, phone number and e-mail address. An offer of employment is contingent on a satisfactory pre-employment background check. For questions or concerns regarding the status of your application or salary ranges, please contact Mary Bordelon at mbordelon@lsu.edu. Posting Date: November 20, 2025 Closing Date (Open Until Filled if No Date Specified): March 20, 2026 Additional Position Information: Background Check- An offer of employment is contingent on a satisfactory pre-employment background check. Benefits- LSU offers outstanding benefits to eligible employees and their dependents including health, life, dental, and vision insurance; flexible spending accounts; retirement options; various leave options; paid holidays; wellness benefits; tuition exemption for qualified positions; training and development opportunities; employee discounts; and more! Positions approved to work outside the State of Louisiana shall be employed through Louisiana State University's partner, nextSource Workforce Solutions, for Employer of Record Services including but not limited to employment, benefits, payroll, and tax compliance. Positions employed through Employer of Record Services will be offered benefits and retirement as applicable through their provider and will not be eligible for State of Louisiana benefits and retirement. Essential Position (Y/N): LSU is an Equal Opportunity Employer. All candidates must have valid U.S. work authorization at the time of hire and maintain that valid work authorization throughout employment. Changes in laws, regulations, or government policies may impact the university's ability to employ individuals in certain positions. HCM Contact Information: For questions or concerns related to updating your application with attachments (e.g., resumes, RS:17 documents), date of birth, or reactivating applications, please contact the LSU Human Resources Management Office at 225-578-8200 or email HR@lsu.edu. For questions or concerns regarding the status of your application or salary ranges, please contact the department using the information provided in the Special Instructions section of this job posting.
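
The following toy script illustrates the extract-transform-load pattern this role centers on, using an in-memory SQLite database; the table names are hypothetical and unrelated to LSU's warehouse.

```python
# Small ETL illustration: aggregate a source table into a reporting table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_enrollment (student_id INT, term TEXT, credit_hours REAL)")
conn.executemany(
    "INSERT INTO src_enrollment VALUES (?, ?, ?)",
    [(1, "2025FA", 15), (2, "2025FA", 12), (1, "2026SP", 9)],
)
conn.execute("CREATE TABLE dw_term_summary (term TEXT, students INT, total_hours REAL)")

# Transform + load: roll the source rows up into the warehouse summary table.
conn.execute("""
    INSERT INTO dw_term_summary
    SELECT term, COUNT(DISTINCT student_id), SUM(credit_hours)
    FROM src_enrollment
    GROUP BY term
""")
print(list(conn.execute("SELECT * FROM dw_term_summary ORDER BY term")))
```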

Posted 30+ days ago

SIGINT Assessments & Data Analytics Engineer

KBR - Chantilly, VA
Title: SIGINT Assessments & Data Analytics Engineer Belong. Connect. Grow. with KBR! KBR's National Security Solutions team provides high-end engineering and advanced technology solutions to our customers in the intelligence and national security communities. In this position, your work will have a profound impact on the country's most critical role - protecting our national security. KBR is seeking an Engineer to support constellation design and planning studies, requirements development and verification, and compliance analyses against user requirements. Additionally, the selected Engineer will provide programmatic support directly to Government Leads. Why Join Us? Innovative Projects: KBR's work is at the forefront of engineering, logistics, operations, science, program management, mission IT and cybersecurity solutions. Collaborative Environment: Be part of a dynamic team that thrives on collaboration and innovation, fostering a supportive and intellectually stimulating workplace. Impactful Work: Your contributions will be pivotal in designing and optimizing defense systems that ensure national security and shape the future of space defense. Key Responsibilities: Support development of models and simulations to predict expected mission performance against requirements. Support development of detailed test objectives, plans, and success criteria to support analysis objectives. Conduct engineering analysis in support of system impact assessments and evaluation of results for complex aerospace systems. Work with multiple teams and stakeholders in the development of analysis plans, analyzing data and reporting results. Coordinate system-level analysis across the enterprise. Help the customer establish priorities and engage with various internal and external stakeholders, including senior executive officials, military officers, development contractors, and end users. Develop briefings and assessments for a variety of audiences, conveying information in a clear and articulate manner to customer leadership. Work Environment: Location: On-site Travel Requirements: 10-20% within the United States Working Hours: Standard Qualifications: Required: Active/Current TS/SCI with Poly Bachelor's degree in Math, Physics, Electrical Engineering, or a similar technical field. 10 or more years of relevant systems engineering experience. Astronautics or Aerospace Engineering experience. Experience in the analysis of data from space system components, with the ability to conduct performance analysis and other technical assessments of the data. Familiarity with signal processing, digital communications, and RF theory. Knowledge and proficiency in conducting engineering analysis and modeling using tools such as Python, MATLAB, Simulink, Satellite Tool Kit (STK), Excel, and other standard analysis tools. Ability to clearly and concisely present analysis data. Modeling and simulation experience. Belong, Connect and Grow at KBR At KBR, we are passionate about our people and our Zero Harm culture. These inform all that we do and are at the heart of our commitment to, and ongoing journey toward being a People First company. That commitment is central to our team of teams philosophy and fosters an environment where everyone can Belong, Connect and Grow. We Deliver - Together. KBR is an equal opportunity employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, disability, sex, sexual orientation, gender identity or expression, age, national origin, veteran status, genetic information, union status and/or beliefs, or any other characteristic protected by federal, state, or local law.
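
As a small illustration of the Python-based modeling and simulation work listed above (purely notional and unrelated to any KBR program), the sketch below runs a Monte Carlo estimate of an energy detector's detection probability.

```python
# Toy Monte Carlo estimate of how often a simple energy detector finds a weak
# sinusoid in unit-variance noise. Parameters and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples, snr_db = 2000, 256, -3.0
amplitude = np.sqrt(2 * 10 ** (snr_db / 10))  # sets signal power for the target SNR
t = np.arange(n_samples)
signal = amplitude * np.sin(2 * np.pi * 0.1 * t)

detections = 0
threshold = n_samples * 1.3  # crude threshold on total received energy
for _ in range(n_trials):
    noise = rng.normal(0, 1, n_samples)
    energy = np.sum((signal + noise) ** 2)
    detections += energy > threshold

print(f"estimated detection probability at {snr_db} dB SNR: {detections / n_trials:.2f}")
```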

Posted 30+ days ago

Sr. Revenue Operations Data Analyst

Filevine - Chicago, IL

$100,000 - $120,001 / year

Filevine is forging the future of legal work with cloud-based workflow tools. We have a reputation for intuitive, streamlined technology that helps professionals manage their organization and serve their clients better. We're also known for our team of extraordinary and passionate professionals who love working together to help organizations thrive. Our success has catapulted Filevine to the forefront of our field-we are ranked as one of the most innovative and fastest-growing technology companies in the country by both Deloitte and Inc. Our Mission Filevine is building the seamless intersection between legal and business by creating a world- class platform to help professionals scale. Role Summary: The Sr. Revenue Operations Data Analyst will be responsible for driving operational efficiency and business intelligence across the organization. This role involves managing monthly enterprise reporting and billing cycles, developing and maintaining BI dashboards, and optimizing workflows through automation tools. The ideal candidate will leverage data to support cross-functional teams including Product, Operations, and Marketing, while also leading technical initiatives in SQL and low-code environments. Key Responsibilities: Reporting & Billing Operations- Execute enterprise monthly reporting for key client accounts. Manage enterprise monthly billing processes, including troubleshooting data with engineering and modifying processes based on enterprise requirements. Oversee the transition of billing tools and processes for major clients. Business Intelligence & Data Analysis- Develop and maintain business intelligence dashboards in DOMO for high-level analytics. Utilize Metabase for near real-time monitoring, scrappy queries, and operational dashboards. Perform ad-hoc analysis to support Revenue Operations, Product, and Marketing teams using product usage data. Workflow Automation & Process Enhancement- Build and maintain automated workflows and notifications using Retool and Zapier (e.g., Missing Notice Workflow, Event Notifications).Develop and maintain internal applications such as the Services Summary Generator. Manage list building and data synchronization between Salesforce and application data. Data Engineering & Management- Lead DBT development and maintenance, organizing and processing raw application data into BI-ready formats.Execute Sendgrid single sends for business communication to existing customers. Skills & Qualifications: SQL: Expert level proficiency required (Critical). Business Intelligence (BI): Strong experience with platforms like DOMO and Metabase. Low-Code Automation: Proven experience with tools such as Retool and Zapier. Data Build Tool (DBT): Experience with DBT for data transformation. Salesforce: Working knowledge of Salesforce for data integration and list building. Marketing Tools: Familiarity with tools like SendGrid is a plus. $100,000 - $120,001 a year Compensation Information: $100,000-120,000 The base salary range represents the low and high end of the salary range for this position. The total compensation package for this position will be determined by each individual's location, qualifications, education, work experience, skills and performance. We believe in the importance of pay equity - the range listed is just one component of Filevine's total compensation package for employees. This position is also eligible for a paid time off policy, as well as a comprehensive benefits package. 
Cool Company Benefits: A dynamic, rapidly growing company, focused on helping organizations thrive Medical, Dental, & Vision Insurance (for full-time employees) Competitive & Fair Pay Maternity & paternity leave (for full-time employees) Short & long-term disability Opportunity to learn from a dedicated leadership team Centrally located open office building in Sugar House (onsite employees) Top-of-the-line company swag Privacy Policy Notice Filevine will handle your personal information according to what's outlined in our Privacy Policy. Communication about this opportunity, or any open role at Filevine, will only come from representatives with email addresses using "filevine.com". Other addresses reaching out are not affiliated with Filevine and should not be responded to. We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
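
To give a flavor of the reporting and billing rollups described in this role, here is a minimal pandas sketch with invented usage data; the columns and pricing are hypothetical and not Filevine's schema or DBT models.

```python
# Illustrative monthly billing rollup: one invoice line per account per month.
import pandas as pd

usage = pd.DataFrame({
    "account": ["acme", "acme", "globex", "globex", "acme"],
    "event_date": pd.to_datetime(
        ["2025-01-05", "2025-01-20", "2025-01-11", "2025-02-02", "2025-02-14"]),
    "billable_units": [10, 4, 7, 3, 6],
})
unit_price = 25.0  # hypothetical flat rate

monthly = (
    usage
    .assign(month=usage["event_date"].dt.to_period("M"))
    .groupby(["account", "month"], as_index=False)["billable_units"].sum()
    .assign(amount_usd=lambda df: df["billable_units"] * unit_price)
)
print(monthly)
```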

Posted 2 weeks ago

Principal Data Architect

KION Group - Wauwatosa, WI

$134,250 - $179,000 / year

We are seeking a dynamic and highly skilled Principal Data Architect who has extensive experience building enterprise-scale data platforms to lead these foundational efforts. This role demands someone who not only possesses a profound understanding of the data engineering landscape but is also at the forefront of their game. The ideal candidate will contribute significantly to platform development, leading several data engineering teams with diverse skill sets while also remaining very hands-on with coding and actively shaping the future of our data ecosystem. We offer: Career Development Competitive Compensation and Benefits Pay Transparency Global Opportunities Learn More Here: https://www.dematic.com/en-us/about/careers/what-we-offer Dematic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. The base pay range for this role is estimated to be $134,250.00 - $179,000.00 at the time of posting. Final compensation will be determined by various factors such as work location, education, experience, knowledge and skills. Tasks and Qualifications: This is What You Will Do in This Role: As the sole hands-on enterprise data architect, you will be responsible for ideation, architecture, design and development of a new enterprise data platform. You will collaborate with other cloud and security architects to ensure seamless alignment within our overarching technology strategy. Architect and design core components with a microservices architecture, abstracting platform and infrastructure intricacies. Create and maintain essential data platform SDKs and libraries, adhering to industry best practices. Design and develop connector frameworks and modern connectors to source data from disparate systems, both on-prem and cloud. Design and optimize data storage, processing, and querying performance for large-scale datasets using industry best practices while keeping costs in check. Architect and design the best security patterns and practices. Design and develop data quality frameworks and processes to ensure the accuracy and reliability of data. Collaborate with data scientists, analysts, and cross-functional teams to design data models, database schemas, and data storage solutions. Design and develop advanced analytics and machine learning capabilities on the data platform. Design and develop observability and data governance frameworks and practices. Stay up to date with the latest data engineering trends, technologies, and best practices. Drive the deployment and release cycles, ensuring a robust and scalable platform. What We Are Looking For: 15+ years of proven experience in modern data engineering, broader data landscape experience and exposure, and solid software engineering experience. Prior experience architecting and building successful enterprise-scale data platforms in a greenfield environment is a must. Proficiency in building end-to-end data platforms and data services in GCP is a must.
Proficiency in tools and technologies: BigQuery, SQL, Python, Spark, DBT, Airflow, Kafka, Kubernetes, Docker. Solid experience designing and developing distributed microservices based data architectures. Proficiency in architecting and designing and development experience with batch and real time streaming infrastructure and workloads. Proficiency with IoT architectures. Solid experience with architecting and implementing metadata management including data catalogues, data lineage, data quality and data observability for big data workflows. Hands-on experience with GCP ecosystem and data lakehouse architectures. Strong experience with container technologies such as Docker, Kubernetes. Strong understanding of data modeling, data architecture, and data governance principles. Excellent experience with DataOps principles and test automation. Excellent experience with observability tooling: Grafana, Datadog. Previous experience working with engineers of all levels - Principal, Senior and Junior What Will Set You Apart : Experience with Data Mesh architecture. Experience building Semantic layers for data platforms. Experience building scalable IoT architectures #LI-DP1
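
As an illustration of the data-quality frameworks mentioned above (a minimal sketch with made-up rules and tables, not Dematic's platform), a declarative check runner can be as small as this.

```python
# Tiny declarative data-quality framework: named rules evaluated against a table.
import pandas as pd

RULES = {
    "order_id is unique": lambda df: df["order_id"].is_unique,
    "quantity is positive": lambda df: (df["quantity"] > 0).all(),
    "warehouse is known": lambda df: df["warehouse"].isin({"MKE", "ATL"}).all(),
}

def run_checks(df: pd.DataFrame) -> dict:
    # Returns a pass/fail map that could feed an observability dashboard or an alert.
    return {name: bool(rule(df)) for name, rule in RULES.items()}

orders = pd.DataFrame({
    "order_id": [101, 102, 103],
    "quantity": [5, 0, 2],
    "warehouse": ["MKE", "ATL", "MKE"],
})
print(run_checks(orders))  # the 'quantity is positive' rule fails on this sample
```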

Posted 1 week ago

Data Team Lead - TS/SCI w/Poly

Parsons Commercial Technology Group Inc. - Annapolis Junction, MD

$128,700 - $231,700 / year

In a world of possibilities, pursue one with endless opportunities. Imagine Next! At Parsons, you can imagine a career where you thrive, work with exceptional people, and be yourself. Guided by our leadership vision of valuing people, embracing agility, and fostering growth, we cultivate an innovative culture that empowers you to achieve your full potential. Unleash your talent and redefine what's possible. Job Description: Parsons is seeking a highly skilled and experienced Data Team Lead to oversee our Information System Security Officer (ISSO) and Information System Security Engineer (ISSE) functions, manage the overall cloud data architecture, and lead a team of system engineers focused on acquiring sensitive cyber data feeds. This role is crucial in ensuring the secure and efficient ingestion of hundreds of data feeds across multiple data fabrics to support our project objectives. The ideal candidate will have a strong background in data architecture, cybersecurity, and team leadership, with a proven ability to decompose stakeholder data requirements and drive the team to meet project deadlines. What You'll Be Doing: Maintain a robust cloud data architecture that supports sensitive data providers across the DoD and IC and rapid, secure data acquisition at scale. Work closely with stakeholders to understand and translate their data requirements into actionable plans, ensuring that the team is aligned and on schedule. This role requires exceptional leadership and communication skills, as well as expertise in cloud technologies, data security protocols, and project management. Adept at problem-solving, capable of managing multiple priorities, and committed to fostering a collaborative and high-performing team environment. Possess a strong understanding of big data systems to evaluate and optimize data workflows, ensuring the integrity and security of data throughout the ingestion process. Experience with data governance, compliance standards, and cybersecurity best practices is essential. The ability to mentor and guide team members, coupled with a strategic vision for data management, will be key to driving the success of our data initiatives and supporting the overall goals of the project. What Required Skills You'll Bring: Must have a bachelor's degree, a minimum of ten years of experience with big data cloud systems, data governance, and experience leading data efforts Active TS/SCI w/Poly What Desired Skills You'll Bring: Experience with NC3 mission space AWS Cloud certifications Security Clearance Requirement: An active Top Secret SCI w/Polygraph security clearance is required for this position. This position is part of our Federal Solutions team. The Federal Solutions segment delivers resources to our US government customers that ensure the success of missions around the globe. Our intelligent employees drive the state of the art as they provide services and solutions in the areas of defense, security, intelligence, infrastructure, and environmental. We promote a culture of excellence and close-knit teams that take pride in delivering, protecting, and sustaining our nation's most critical assets, from Earth to cyberspace. Throughout the company, our people are anticipating what's next to deliver the solutions our customers need now. 
Salary Range: $128,700.00 - $231,700.00 We value our employees and want our employees to take care of their overall wellbeing, which is why we offer best-in-class benefits such as medical, dental, vision, paid time off, 401(k), life insurance, flexible work schedules, and holidays to fit your busy lifestyle! Parsons is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, veteran status or any other protected status. We truly invest and care about our employee's wellbeing and provide endless growth opportunities as the sky is the limit, so aim for the stars! Imagine next and join the Parsons quest-APPLY TODAY! Parsons is aware of fraudulent recruitment practices. To learn more about recruitment fraud and how to report it, please refer to https://www.parsons.com/fraudulent-recruitment/ .

Posted 4 weeks ago

CACI International Inc. logo

Data Scientist / Senior AI Machine Learning Research Engineer

CACI International Inc.Sterling, VA

$98,500 - $206,800 / year

Job Title: Data Scientist / Senior AI Machine Learning Research Engineer Job Category: Science Time Type: Full time Minimum Clearance Required to Start: Top Secret Employee Type: Regular Percentage of Travel Required: Up to 10% Type of Travel: Continental US Anticipated Posting End: 1/1/2100 The Opportunity: CACI has an exciting new opportunity for a Senior AI and Machine Learning Research Engineer. You will apply machine learning and statistics to develop algorithms that solve challenging problems in the signal processing and computer networking domains. In this role, you'll leverage your strong foundation in machine learning, data science, and signal processing to solve complex challenges in the RF domain. Responsibilities: Strong mathematical foundation in statistics, linear algebra, and calculus with demonstrated ability to understand and implement machine learning algorithms from first principles rather than solely relying on pre-built libraries. Proficiency in designing and building data pipelines, including experience with ETL processes, data warehousing solutions, and optimizing workflows for large-scale data processing. Hands-on experience with cloud-based infrastructure (e.g., AWS, Azure, GCP) for deploying ML solutions, including containerization, orchestration, and CI/CD pipelines for model deployment. Programming expertise in Python and SQL, with experience using data engineering frameworks (e.g., Spark, Airflow) and ML libraries (e.g., TensorFlow, PyTorch, scikit-learn). Demonstrated experience in establishing ML governance practices, including version control for datasets and models, experiment tracking, model monitoring, and implementing reproducible research principles. Qualifications: Required: Master's degree in a quantitative field with mathematical underpinnings and at least 4 years' experience. Experience developing models. Strong background in machine learning, mathematics, and statistics. Comfortable using Linux operating systems and commonly used Linux utilities. Must be a US Citizen with the ability to obtain, maintain and/or transfer the required security clearance as dictated by the contract. Must have an active Top Secret clearance. Desired: Ph.D. in computer science, computer engineering, machine learning, statistics, applied mathematics, or physics. Experience applying machine learning to signal processing and/or other time-series data analysis applications. Knowledge of or experience with information theory, probability theory, parametric and non-parametric statistical tests. Familiarity with concepts and techniques associated with adversarial AI and AI/ML assurance. Active Top Secret/SCI clearance preferred. What You Can Expect: A culture of integrity. At CACI, we place character and innovation at the center of everything we do. As a valued team member, you'll be part of a high-performing group dedicated to our customer's missions and driven by a higher purpose - to ensure the safety of our nation. An environment of trust. CACI values the unique contributions that every employee brings to our company and our customers - every day. You'll have the autonomy to take the time you need through a unique flexible time off benefit and have access to robust learning resources to make your ambitions a reality. A focus on continuous growth. Together, we will advance our nation's most critical missions, build on our lengthy track record of business success, and find opportunities to break new ground - in your career and in our legacy. Your potential is limitless. So is ours.
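
The posting's emphasis on implementing machine learning algorithms from first principles, rather than relying only on pre-built libraries, can be illustrated with a minimal sketch. The data, learning rate, and iteration count below are illustrative assumptions, not anything specified by CACI.

    # Minimal from-first-principles logistic regression (illustrative sketch only).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                                # synthetic features
    true_w = np.array([1.5, -2.0, 0.5])
    y = (1 / (1 + np.exp(-(X @ true_w))) > 0.5).astype(float)    # synthetic labels

    w = np.zeros(3)
    lr = 0.1
    for _ in range(500):                          # plain batch gradient descent
        p = 1 / (1 + np.exp(-(X @ w)))            # sigmoid predictions
        grad = X.T @ (p - y) / len(y)             # gradient of the log-loss
        w -= lr * grad

    print("estimated weights:", np.round(w, 2))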
Pay Range: There are a host of factors that can influence final salary including, but not limited to, geographic location, Federal Government contract labor categories and contract wage rates, relevant prior work experience, specific skills and competencies, education, and certifications. Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives. We offer competitive compensation, benefits and learning and development opportunities. Our broad and competitive mix of benefits options is designed to support and protect employees and their families. At CACI, you will receive comprehensive benefits such as healthcare, wellness, financial, retirement, family support, continuing education, and time off benefits. Since this position can be worked in more than one location, the range shown is the national average for the position. The proposed salary range for this position is $98,500-$206,800. CACI is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, age, national origin, disability, status as a protected veteran, or any other protected characteristic.

Posted 30+ days ago

Langan logo

Environmental Practice Lead - Data Centers

LanganChicago, IL

$112,000 - $200,000 / year

Langan provides expert land development engineering and environmental consulting services for major developers, renewable energy producers, energy companies, corporations, healthcare systems, colleges/universities, and large infrastructure programs throughout the U.S. and around the world. Our employees collaborate seamlessly among 40+ offices and gain valuable hands-on experience that fosters career growth. Langan's culture is entrepreneurial, from advancing innovative technical solutions, to participating in robust training and knowledge sharing, to making progressive change within the communities where we live and work. Consistently ranked among the top ten "Best Firms to Work For" and Engineering News-Record's top 50 firms worldwide, Langan attracts and retains the best talent in the industry. Employees thrive at Langan, a firm that fosters an inclusive and supportive work environment for all; prioritizes wellbeing, health, and safety; encourages volunteerism and philanthropy; offers workplace flexibility, along with carbon-neutral office spaces; and empowers individuals to contribute their skills and knowledge to make impactful contributions. Job Summary Langan is seeking an Environmental Practice Lead - Data Centers to join its collaborative national Data Centers practice. This individual will serve a key function in leading and building on an existing team of experienced consultants that will bring high quality and timely solutions to this dynamic sector. The services that the successful candidate would be involved in include environmental due diligence, site assessment, remediation services, air quality modeling and permitting, and water and wastewater, as well as integrating the environmental services with our existing top-of-class site/civil and geotechnical engineering data center capabilities. In this role, you will have the opportunity to join an industry-leading engineering and environmental consultancy as it continues to grow its data center practice. Job Responsibilities Apply knowledge, technical expertise, strategic insight, and leadership to elevate our environmental capabilities and visibility within the data center industry; Refinement of data center market strategy and materials; Overall responsibility for work product quality, expansion of services, revenue growth, and profitability; Identification of technical and geographic growth areas, including oncoming trends; Work closely with Data Centers and Environmental practice leadership to develop growth plans and ensure staff utilization; Apply an entrepreneurial demeanor and participate in Business Development activities by developing new clients for the firm and maintaining relationships with existing clients; Integration with firm-wide multi-discipline data center team; Provide direct oversight and management of junior staff for specific project assignments. Coach, train, and motivate staff assigned to Environmental projects. Participate in interviewing and hiring staff; and Perform complex analyses for specific portions of broader engineering and environmental projects.
Qualifications 15+ years of experience in the environmental industry with 5+ years of direct experience in data center/mission critical industry; Bachelor's degree in Environmental Engineering, Geology, Hydrogeology, or Science; Master's degree preferred; Professional certification/credentials preferred (PE, CHMM, PG, etc.); In-depth knowledge and understanding of environmental regulations and experience providing strategic development, permitting and operational support; Strong people, project, and client management skills; Excellent public speaking, written, and verbal communication skills; Strong attention to detail with excellent analytical, multitasking, and judgment capabilities; Ability to effectively work independently and in a team environment; and Possess reliable transportation for client meetings and job site visits and a valid driver's license in good standing. Langan provides a rich array of programs and benefits to help its employees advance their careers and enhance the quality of their lives. Our comprehensive compensation package includes: full-time employment company paid medical, dental, and vision coverage; life insurance, short- and long-term disability insurance, and paid pregnancy disability leave; 401(k)/Roth with company match; paid time off including parental and military leave; employee referral and professional license bonuses; and educational reimbursement. Langan offers employee resource groups; flexible work schedules; extensive training; wellness programs; buddy and mentoring programs; and much more! Langan is committed to providing equal employment opportunities to all qualified applicants and employees, including individuals with disabilities and protected veterans. We believe that an inclusive workplace is essential for the well-being and success of our employees. Certain US jurisdictions require Langan to include an estimate of salary or hourly ranges. The estimated range for this role is: $112,000 - $200,000. Actual compensation may vary based on factors such as related work experience, location, market conditions, education/training, certifications and other credentials, as well as applicable knowledge and skills. Certain roles may be eligible for overtime and participation in the firm's annual bonus and performance review program. Bonuses are discretionary and based on individual job performance and the profitability of the firm. Employees are also eligible to receive up to 20 days of paid vacation time, 10 days of paid sick time and 10 paid holidays throughout the year. Eligibility and actual paid time off may vary based on local law and factors such as hours worked, related work experience and level.

Posted 30+ days ago

Freeform logo

Software Engineer (Edge Data & Telemetry)

FreeformLos Angeles, CA

$115,000 - $150,000 / year

SOFTWARE ENGINEER (EDGE DATA & TELEMETRY) Freeform is deploying software-defined, autonomous metal 3D printing factories around the world, bringing the scalability of software to physical production. Our proprietary technology stack leverages advanced sensing, real-time controls, and data-driven learning to produce digitally verified, flawless parts at unprecedented speed and cost. Our mission is to make the transformative power of 3D printing available to all industries at scale and unlock the future of innovation. In this role, you will be responsible for architecting and implementing factory edge services, backend data pipeline infrastructure and advanced telemetry capabilities of our metal 3D printing systems. You will work with petabyte-scale data feeds, enabling Freeform engineers to rapidly access critical data across the factory ecosystem. You will work with our machine learning team to provide data pipelines that feed the printing industry's first GPU-based ML platform. Ultimately, your code will enable the first production scale, high quality, and fully automated metal 3D printing factory. 3D printing experience is not required to be successful here - rather we look for smart, motivated, collaborative engineers who love solving hard problems and creating amazing technology! Responsibilities: Design and implement a horizontally scalable platform that ingests millions of hardware sensor data points per second. Deliver high-quality data services that enable the machine learning team to access real-time and historical print data. Develop durable, efficient database solutions for real-time queries and large-scale analytics workloads. Create and operate data pipelines to collect, transform, and query petabyte-scale sensor and telemetry data. Write, test, and deploy reliable, maintainable software that improves automation and data quality across the factory ecosystem. Perform system calibrations, capture quality metrics, and monitor the long-term health and performance of printing systems. Basic Qualifications: 3+ years of professional experience in software development. Proficiency writing, deploying, and maintaining performance oriented backend services (preferably C++, Rust, Go, or Java). Experience developing robust data pipeline software. Nice to Have: Degree in computer science, engineering, mathematics, or related field. Experience with Arrow, Parquet, Iceberg, Spark, Kafka. Experience with time series databases, big data warehousing solutions and cloud computing. Experience with containerized applications and deployment via Kubernetes. Experience deploying to and working with physical hardware. Experience working with high-speed high-volume data processing pipelines. Familiarity with 3D data and graphics. Creative thinker able to apply first principles reasoning to solve complex problems. Excellent verbal and written communication skills Location: We are located in Hawthorne, CA in a 35,000 square foot, state-of-the-art facility featuring large open spaces for team collaboration, R&D, and production, as well as easy access to the 405, 105, and 110 freeways. Our facility is in the heart of Los Angeles' vibrant emerging tech ecosystem alongside many other high growth startups and enterprises. What We Offer: We have an inclusive and diverse culture that values collaboration, learning, and making deliberate data-driven decisions. We offer a unique opportunity to be an early and integral member of a rapidly growing company that is scaling a world-changing technology. 
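
As a rough illustration of the telemetry-pipeline work described above, the sketch below buffers simulated sensor readings and writes them to a columnar Parquet file with pyarrow. The field names, batch size, and output path are assumptions made for illustration, not Freeform's actual schema or stack.

    # Sketch: buffer simulated sensor readings and flush them to a Parquet file.
    import time
    import random
    import pyarrow as pa
    import pyarrow.parquet as pq

    readings = {"timestamp_ns": [], "sensor_id": [], "value": []}
    for _ in range(1000):                      # stand-in for a live telemetry feed
        readings["timestamp_ns"].append(time.time_ns())
        readings["sensor_id"].append(random.randint(0, 15))
        readings["value"].append(random.gauss(0.0, 1.0))

    table = pa.table(readings)                 # columnar in-memory batch
    pq.write_table(table, "telemetry_batch.parquet")  # durable, analytics-friendly output
    print(table.num_rows, "rows written")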
Benefits Significant stock option packages 100% employer-paid Medical, Dental, and Vision insurance (premium PPO and HMO options) Life insurance Traditional and Roth 401(k) Relocation assistance provided Paid vacation, sick leave, and company holidays Generous Paid Parental Leave and extended transition back to work for the birthing parent Free daily catered lunch and dinner, and fully stocked kitchenette Casual dress, flexible work hours, and regular catered team building events Compensation As a growing company, the salary range is intentionally wide as we determine the most appropriate package for each individual taking into consideration years of experience, educational background, and unique skills and abilities as demonstrated throughout the interview process. Our intent is to offer a salary that is commensurate for the company's current stage of development and allows the employee to grow and develop within a role. In addition to the significant stock option package, the estimated salary range for this role is $115,000-$150,000, inclusive of all levels/seniority within this discipline. Freeform is an Equal Opportunity Employer that values diversity; employment with Freeform is governed on the basis of merit, competence and qualifications and will not be influenced in any manner by race, color, religion, gender, national origin/ethnicity, veteran status, disability status, age, sexual orientation, gender identity, marital status, mental or physical disability or any other legally protected status.

Posted 30+ days ago

Oscar Health Insurance logo

Analyst, Payment Integrity (Data Mining)

Oscar Health InsuranceTempe, AZ

$25 - $33 / hour

Hi, we're Oscar. We're hiring an Analyst, Data Mining to join our Payment Integrity team. Oscar is the first health insurance company built around a full stack technology platform and a relentless focus on serving our members. We started Oscar in 2012 to create the kind of health insurance company we would want for ourselves-one that behaves like a doctor in the family. About the role: The Data Mining Analyst is responsible for reviewing claims data to identify incorrect payments. The analyst supports payment integrity quality control for incorrect payments identified both internally and by vendors. You will report into the Senior Manager, Payment Integrity Work Location: This position is based in our Tempe, Arizona office, requiring a hybrid work schedule with 3 days of in-office work per week. Thursdays are a required in-office day for team meetings and events, while your other two office days are flexible to suit your schedule. #LI-Hybrid Pay Transparency: The base pay for this role is: $25.44 - $33.39 per hour. You are also eligible for employee benefits and monthly vacation accrual at a rate of 15 days per year Responsibilities: Analyze and investigate claims data Support quality review claim and concept findings from internal and external partners regarding adverse claim outcomes. Utilize data analysis skills and tools to develop accurate, quantitative analyses of issues. Work with the team to identify thematic areas of opportunity to reduce incorrect payments. Mediate errors with key stakeholders Compliance with all applicable laws and regulations Other duties as assigned Requirements: 2+ years of experience in healthcare 2+ years of experience working with large claims data sets using excel or a database language Bonus points: Some coding experience or database language exposure 2+ years of related work experience in payment integrity data mining Experience using SQL This is an authentic Oscar Health job opportunity. Learn more about how you can safeguard yourself from recruitment fraud here. At Oscar, being an Equal Opportunity Employer means more than upholding discrimination-free hiring practices. It means that we cultivate an environment where people can be their most authentic selves and find both belonging and support. We're on a mission to change health care -- an experience made whole by our unique backgrounds and perspectives. Pay Transparency: Final offer amounts, within the base pay set forth above, are determined by factors including your relevant skills, education, and experience. Full-time employees are eligible for benefits including: medical, dental, and vision benefits, 11 paid holidays, paid sick time, paid parental leave, 401(k) plan participation, life and disability insurance, and paid wellness time and reimbursements. Artificial Intelligence (AI): Our AI Guidelines outline the acceptable use of artificial intelligence for candidates and detail how we use AI to support our recruiting efforts. Reasonable Accommodation: Oscar applicants are considered solely based on their qualifications, without regard to applicant's disability or need for accommodation. Any Oscar applicant who requires reasonable accommodations during the application process should contact the Oscar Benefits Team (accommodations@hioscar.com) to make the need for an accommodation known. California Residents: For information about our collection, use, and disclosure of applicants' personal information as well as applicants' rights over their personal information, please see our Privacy Policy.
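
To give a sense of the claims-data analysis this role involves, here is a minimal pandas sketch that flags potentially duplicated claim payments. The column names and the duplication rule are invented for illustration and are not Oscar's actual data model or audit logic.

    # Sketch: flag claim lines that look like duplicate payments (illustrative rule only).
    import pandas as pd

    claims = pd.DataFrame({
        "claim_id":   ["C1", "C2", "C3", "C4"],
        "member_id":  ["M1", "M1", "M2", "M2"],
        "service_dt": ["2024-01-05", "2024-01-05", "2024-02-10", "2024-02-11"],
        "proc_code":  ["99213", "99213", "99214", "99214"],
        "paid_amt":   [85.0, 85.0, 120.0, 120.0],
    })

    dup_key = ["member_id", "service_dt", "proc_code", "paid_amt"]
    claims["possible_duplicate"] = claims.duplicated(subset=dup_key, keep=False)
    print(claims[claims["possible_duplicate"]])   # rows a reviewer would investigate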

Posted 4 days ago

Anthropic logo

Data Center Strategic Sourcing Lead

AnthropicSan Francisco, CA

$290,000 - $435,000 / year

About Anthropic Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. Data Center IT Strategic Sourcing Lead About the role We are hiring a Strategic Sourcing Lead to drive development and execution of procurement strategies to help scale Anthropic's Data Center IT equipment footprint. In this role, you will own end-to-end sourcing strategies for critical IT infrastructure such as fiber, network switch gear, and high performance compute systems. You will partner closely with engineering, demand planning, and third parties to drive hardware availability, establish vendor relationships, and negotiate commercial agreements. As a sourcing lead focused on key Data Center equipment, you will play a critical role in helping Anthropic to achieve its ambitious scaling goals. Responsibilities: Drive commercial negotiations and Master Service Agreements for critical IT infrastructure such as networking equipment, fiber connectivity, and compute hardware. Audit partner procurement schedules, ensure optimal allocation, and drive cost efficiency of supply in constrained markets. Implement frameworks for ensuring third-party datacenter partners adhere to commercial equipment terms and technical specifications. Develop a global inventory strategy including warehousing, third-party logistics, buffer stock, and distribution models to accelerate time to market and improve availability. Proactively monitor global supply chain volatility and execute contingency plans to avoid critical project delays. Translate technical roadmaps into equipment forecasts and procurement pipelines. Partner with Finance to optimize inventory strategy. Establish efficient return and replacement workflows with vendors to restore failed equipment quickly. Negotiate warranty terms and track vendor performance on RMA execution. You may be a good fit if you: Have 7+ years of experience in supply chain management, sourcing and procurement. Have a track record of developing processes for forecasting, planning, and managing data center material for build-out and/or operations. Are familiar with data center network and compute hardware (e.g. switches, routers, cables, servers, PDUs, etc). Possess a bachelor's degree in a relevant domain or equivalent practical experience. It's a bonus if you have: 10+ years of experience in supply chain management, sourcing and procurement. 5+ years of experience in data center infrastructure. Experience working with general contractors, integrators and suppliers. Experience working on data center construction programs. Leadership and influencing skills. The annual compensation range for this role is listed below. For sales roles, the range provided is the role's On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Annual Salary: $290,000-$435,000 USD Logistics Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links-visit anthropic.com/careers directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process

Posted 30+ days ago

PwC logo

AI & GenAI Data Scientist - Manager

PwCSacramento, CA

$99,000 - $232,000 / year

Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Enhancing your leadership style, you motivate, develop and inspire others to deliver quality. You are responsible for coaching, leveraging team member's unique strengths, and managing performance to deliver on client expectations. With your growing knowledge of how business works, you play an important role in identifying opportunities that contribute to the success of our Firm. You are expected to lead with integrity and authenticity, articulating our purpose and values in a meaningful way. You embrace technology and innovation to enhance your delivery and encourage others to do the same. Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Analyse and identify the linkages and interactions between the component parts of an entire system. Take ownership of projects, ensuring their successful planning, budgeting, execution, and completion. Partner with team leadership to ensure collective ownership of quality, timelines, and deliverables. Develop skills outside your comfort zone, and encourage others to do the same. Effectively mentor others. Use the review of work as an opportunity to deepen the expertise of team members. Address conflicts or issues, engaging in difficult conversations with clients, team members and other stakeholders, escalating where appropriate. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. 
Minimum Degree Required Bachelor's Degree Minimum Year(s) of Experience 7 year(s) Demonstrates extensive-level abilities and/or a proven record of success managing the identification and addressing of client needs: Managing development teams in building of AI and GenAI solutions, including but not limited to analytical modeling, prompt engineering, general all-purpose programming (e.g., Python), testing, communication of results, front-end and back-end integration, and iterative development with clients Documenting and analyzing business processes for AI and Generative AI opportunities, including gathering of requirements, creation of initial hypotheses, and development of AI/GenAI solution approach Collaborating with client teams to understand their business problem and select the appropriate models and approaches for AI/GenAI use cases Designing and solutioning AI/GenAI architectures for clients, specifically for plugin-based solutions (i.e., ChatClient application with plugins) and custom AI/GenAI application builds Managing teams to process unstructured and structured data to be consumed as context for LLMs, including but not limited to embedding of large text corpora, generative development of SQL queries, and building connectors to structured databases Managing daily operations of a global data and analytics team on client engagements, reviewing developed models, providing feedback, and assisting in analysis; Directing data engineers and other data scientists to deliver efficient solutions to meet client requirements; Leading and contributing to development of proof of concepts, pilots, and production use cases for clients while working in cross-functional teams; Facilitating and conducting executive-level presentations to showcase GenAI solutions, development progress, and next steps; Structuring, writing, communicating, and facilitating client presentations; and, Managing associates and senior associates through coaching, providing feedback, and guiding work performance. Demonstrates extensive abilities and/or a proven record of success learning and performing in functional and technical capacities, including the following areas: Managing GenAI application development teams including back-end and front-end integrations Using Python (e.g., Pandas, NLTK, Scikit-learn, Keras, etc.), common LLM development frameworks (e.g., LangChain, Semantic Kernel), Relational storage (SQL), Non-relational storage (NoSQL); Experience in analytical techniques such as Machine Learning, Deep Learning and Optimization Vectorization and embedding, prompt engineering, RAG (retrieval-augmented generation) workflow development Understanding or hands-on experience with Azure, AWS, and/or Google Cloud platforms Experience with Git Version Control, Unit/Integration/End-to-End Testing, CI/CD, release management, etc. Travel Requirements Up to 80% Job Posting End Date Learn more about how we work: https://pwc.to/how-we-work PwC does not intend to hire experienced or entry level job seekers who will need, now or in the future, PwC sponsorship through the H-1B lottery, except as set forth within the following policy: https://pwc.to/H-1B-Lottery-Policy. As PwC is an equal opportunity employer, all qualified applicants will receive consideration for employment at PwC without regard to race; color; religion; national origin; sex (including pregnancy, sexual orientation, and gender identity); age; disability; genetic information (including family medical history); veteran, marital, or citizenship status; or, any other status protected by law.
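
The RAG workflow referenced in the qualifications above can be sketched, in highly simplified form, as retrieve-then-prompt. The example below uses TF-IDF similarity in place of learned embeddings, uses made-up documents and a made-up question, and stops short of calling an LLM, so it is only a schematic of the retrieval and prompt-assembly steps, not PwC's methodology.

    # Sketch of the retrieval step in a retrieval-augmented generation (RAG) workflow.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "Policy A covers data retention requirements for client engagements.",
        "Policy B describes expense approval workflows for managers.",
        "Policy C explains model governance and review procedures.",
    ]
    question = "What are the rules for keeping client data?"

    vec = TfidfVectorizer().fit(corpus + [question])
    doc_vecs, q_vec = vec.transform(corpus), vec.transform([question])
    scores = cosine_similarity(q_vec, doc_vecs).ravel()
    top_doc = corpus[scores.argmax()]            # most relevant passage

    prompt = f"Answer using only this context:\n{top_doc}\n\nQuestion: {question}"
    print(prompt)                                # this prompt would then be sent to an LLM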
For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles' Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for Employment in accordance with these laws. At PwC, we recognize that conviction records may have a direct, adverse, and negative relationship to responsibilities such as accessing sensitive company or customer information, handling proprietary assets, or collaborating closely with team members. We evaluate these factors thoughtfully to establish a secure and trusted workplace for all. Applications will be accepted until the position is filled or the posting is removed, unless otherwise set forth on the following webpage. Please visit this link for information about anticipated application deadlines: https://pwc.to/us-application-deadlines The salary range for this position is: $99,000 - $232,000. Actual compensation within the range will be dependent upon the individual's skills, experience, qualifications and location, and applicable employment laws. All hired individuals are eligible for an annual discretionary bonus. PwC offers a wide range of benefits, including medical, dental, vision, 401k, holiday pay, vacation, personal and family sick leave, and more. To view our benefits at a glance, please visit the following link: https://pwc.to/benefits-at-a-glance

Posted 3 weeks ago

Infosys LTD logo

Data Scientist

Infosys LTDNew York, NY

$94,000 - $164,500 / year

Job Description Infosys is seeking a Data Scientist / Gen AI Lead Consultant with Generative AI, Agentic AI, Machine Learning (ML), AI and Python experience. The ideal candidate is expected to have prior experience in end-to-end implementation of Gen AI and Agentic AI based solutions, fine-tuning large language models, and Machine Learning models, including identification of the 'right' problem, designing the 'optimum' solution, implementing using 'best in class' practices, and deploying the models to production. Will work in alignment with data strategy at various clients, using multiple technologies and platforms. Required Qualifications: Bachelor's Degree or foreign equivalent; will also consider three years of progressive experience in the specialty in lieu of every year of education. At least 7 years of Information Technology experience. At least 4 years of hands-on GenAI / Agentic AI and data science with machine learning. Strong proficiency in Python programming. Experience deploying Gen AI applications with one of the agent frameworks such as LangGraph, AutoGen, or CrewAI. Experience in deploying the Gen AI stack/services provided by various platforms such as AWS, GCP, Azure, IBM Watson. Experience in Generative AI and working with multiple Large Language Models and implementing Advanced RAG based solutions. Experience in processing/ingesting unstructured data from PDFs, HTML, Image files, audio to text etc. Experience with data gathering, data quality, system architecture, coding best practices. Hands-on experience with Vector Databases (such as FAISS, Pinecone, Weaviate, or Azure AI Search). Experience with Lean / Agile development methodologies. This position may require travel and will involve close co-ordination with offshore teams. This position is located in Bridgewater, NJ; Sunnyvale, CA; Austin, TX; Raleigh, NC; Richardson, TX; Tempe, AZ; Phoenix, AZ; Charlotte, NC; Houston, TX; Denver, CO; Hartford, CT; New York, NY; Palm Beach, FL; Tampa, FL or Alpharetta, GA, or is willing to relocate. Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time. Preferred Data Scientist Qualifications: 4 years of hands-on experience with more than one programming language; Python, R, Scala, Java, SQL. Hands-on experience with CI/CD pipelines and DevOps tools like Jenkins, GitHub Actions, or Terraform. Proficiency in NoSQL and SQL databases (PostgreSQL, MongoDB, CosmosDB, DynamoDB). Deep Learning experience with CNNs, RNNs, LSTMs and the latest research trends. Experience in Python AI/ML frameworks such as TensorFlow, PyTorch, or LangChain. Strong understanding and experience of LLM fine-tuning, local deployment of open-source models. Proficiency in building RESTful APIs using FastAPI, Flask, or Django. Experience in model evaluation tools like DeepEval, FMeval, RAGAS, Bedrock model evaluation. Experience with perception (e.g. computer vision), time series data (e.g. text analysis). Big Data experience strongly preferred: HDFS, Hive, Spark, Scala. Data visualization tools such as Tableau; query languages such as SQL, Hive. Good applied statistics skills, such as distributions, statistical testing, regression, etc. The job entails sitting as well as working at a computer for extended periods of time. Should be able to communicate by telephone, email or face to face. Travel may be required as per the job requirements.
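
Since the posting calls out vector databases such as FAISS, the sketch below shows the basic pattern of indexing embedding vectors and running a nearest-neighbour search. The random vectors stand in for real document embeddings, and the example assumes the faiss-cpu package is installed; it is not Infosys' actual solution design.

    # Sketch: index embedding vectors and retrieve nearest neighbours with FAISS.
    import numpy as np
    import faiss

    dim = 64
    rng = np.random.default_rng(42)
    doc_embeddings = rng.random((1000, dim), dtype=np.float32)   # stand-ins for real embeddings

    index = faiss.IndexFlatL2(dim)        # exact L2 search; fine for small collections
    index.add(doc_embeddings)

    query = rng.random((1, dim), dtype=np.float32)
    distances, ids = index.search(query, 5)   # top-5 most similar documents
    print("nearest document ids:", ids[0])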
The estimated annual compensation range for candidates in the below locations will be- Sunnyvale, CA; Bridgewater, NJ; New York, NY, Denver, CO: $94000 to $164500 Along with competitive pay, as a full-time Infosys employee, you are also eligible for the following benefits : Medical/Dental/Vision/Life Insurance Long-term/Short-term Disability Health and Dependent Care Reimbursement Accounts Insurance (Accident, Critical Illness, Hospital Indemnity, Legal) 401(k) plan and contributions dependent on salary level Paid holidays plus Paid Time Off

Posted 3 days ago

TAG - The Aspen Group logo

Senior Data Engineer

TAG - The Aspen GroupChicago, IL

$129,000 - $160,000 / year

The Aspen Group (TAG) is one of the largest and most trusted retail healthcare business support organizations in the U.S., supporting 15,000 healthcare professionals and team members at more than 1,000 health and wellness offices across 46 states in four distinct categories: Dental Care, Urgent Care, Pet Care, and Medical Aesthetics. Working in partnership with independent practice owners and clinicians, the team is united by a single purpose: to prove that healthcare can be better and smarter for everyone. TAG provides a comprehensive suite of centralized business support services that power the impact of four consumer-facing businesses: Aspen Dental, ClearChoice Dental Implant Centers, WellNow Urgent Care, Lovet Veterinary Clinics and Chapter Aesthetic Studio. Each brand has access to a deep community of experts, tools and resources to grow their practices, and an unwavering commitment to delivering high-quality consumer healthcare experiences at scale. We are seeking a Senior Data Engineer who will partner with business, analytics, and engineering teams to design and build data structures to facilitate reporting, models, and monitor key performance metrics. Collaborating across disciplines, you will identify internal/external data sources to design data assets, define a data pipeline strategy & automated testing and implement scalable data solutions. This person will work with our emerging brands that have a very nascent existing data infrastructure. This person will have the opportunity to bring TAG and industry best practices to a new business and be responsible for creating a new base infrastructure that will allow management to effectively operate the business. Additional Background Across TAG, there are several digital transformations in flight. This includes a platform modernization program with a cloud-native migration that will empower an exciting digitalization of the dental industry with advances in 3D printing, scanning, AI, and much more. Now is a great time to join us to design and build the future of the TAG Data Platform. Be a part of a new team building cloud-native solutions with open-source tools and technologies. You will partner with the business to build self-service data assets and modern data science/ analytics solutions. Responsibilities Partner cross-functionally to understand data, reporting, and data asset requirements Work with engineering teams to collect required data from internal and external systems Design ELT and ETL strategy to build performant data solutions that are reliable and scalable in a fast-growing cloud-native data ecosystem using tools such as dbt Rebuilding and automating legacy reporting pipelines to a new platform Contribute toward evolving company's analytical self-service/ ad hoc reporting/Data Analytics/Analytical Modeling/Data Visualization process and/or product Contribute to evolving Data platform engineering process utilizing open-source technologies like Apache Spark, Kafka Develop and conduct automated testing Develop and maintain routines using orchestration tools such as Airflow, Prefect, Kubeflow Document and publish Metadata and table designs to facilitate data adoption. 
Perform pipeline tuning as necessary Qualifications/Requirements Bachelor's Degree or equivalent combination of education, training, and experience; Master's Degree preferred 5+ years of IT/Analytics/Data Science experience Experience with SQL Hands-on experience with Python and/or other programming languages Experience with best practices in automated testing for automated pipelines Experience with building out and maintaining metrics, alarms, and monitoring of services Desired Characteristics Strong background in Health Care Data and related engineering experiences Strong desire to work with analytics data and partners Experience in MS SQL, SQL Server Integration Services (SSIS), and SQL Server Reporting Services (SSRS) would be a plus Experience in GCP, AWS, Azure would be a plus AWS, GCP certifications (Solution Architect, Engineering, Big Data Specialty, etc.) would be a plus Experience/Exposure in Data Science and/or Analytics in cloud would be a plus Experience in Machine Learning/Deep Learning use cases and development would be a plus Experience in Data Cleansing/Transformation would be a plus Experience in Container, Kubernetes, and related technology ecosystems would be a plus Experience in Data Modeling would be a plus Experience in Event-Driven Architecture, Streaming processing would be a plus Experience in sourcing and processing structured, semi-structured, and unstructured data would be a plus Outstanding written and verbal communication skills This role is onsite 4 days/week in our Chicago office (Fulton Market District). A generous benefits package that includes paid time off, health, dental, vision, and 401(k) savings plan with match Salary: $129,000-$160,000/year
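
As a small illustration of the orchestration tooling the posting mentions (Airflow, Prefect, Kubeflow), here is a minimal Airflow DAG sketch with two dependent tasks. The DAG id, schedule, and task bodies are placeholders, and the example assumes a recent Airflow 2.x release where the DAG constructor accepts a schedule argument; it is not TAG's actual pipeline.

    # Sketch: a two-task Airflow DAG (extract -> load), for illustration only.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling data from a source system")      # placeholder extract step

    def load():
        print("writing curated data to the warehouse")  # placeholder load step

    with DAG(
        dag_id="example_elt_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task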

Posted 30+ days ago

Laura Mercier Cosmetics and ReVive Skincare logo

Data Engineer

Laura Mercier Cosmetics and ReVive SkincareColumbus, OH

$80,500 - $100,500 / year

About Us Orveon is a new kind of beauty company launched in December 2021 when we acquired our three iconic brands - bareMinerals, BUXOM, and Laura Mercier. With more than 600 associates, operating in 40+ countries, we're truly a global business. Our headquarters are in New York, with additional locations in major cities worldwide. We love our brands and are embarking on a powerful shift: To change how the world thinks about beauty. We are a collective of premium and prestige beauty brands committed to making beauty better and creating consumer love. People here are passionate, innovative, and thoughtful. This is an inspirational group of talented people, working together to build something better. We are looking for the best talent to join us on that journey. We believe we can accomplish more when we move as one. About The Role We are seeking a skilled and motivated Data Engineer to join our team. As a key contributor to our data architecture, you will play a central role in designing, building, and maintaining scalable data pipelines and solutions using Microsoft Fabric. You will collaborate closely with Power BI developers and business analysts to ensure data is accessible, reliable, and optimized for analytics and decision-making. Primary Duties & Responsibilities Design, develop, and maintain robust data pipelines using Microsoft Fabric, including Data Factory, OneLake, and Lakehouse. Integrate data from various sources (structured and unstructured) into centralized data platforms. Collaborate with Data Architects to implement scalable and secure data models. Optimize data workflows for performance, reliability, and cost-efficiency. Ensure data quality, governance, and compliance with internal and external standards. Support Power BI developers and business analysts with curated datasets and semantic models. Monitor and troubleshoot data pipeline issues and implement proactive solutions. Document data processes, architecture, and best practices. Qualifications Bachelor's degree in Computer Science, Data Science, Information Technology or related field. 3+ years of hands-on experience in data engineering. Proficiency in Apache Spark. Strong programming skills in Python and SQL, with experience in CI/CD. Experience in data modeling. Best practices in managing lakehouses and warehouses. Strong problem-solving skills and attention to detail. Excellent communication and collaboration abilities. Familiarity with Microsoft Fabric technologies and tools. Familiarity with version control (Git/Azure DevOps). Microsoft data technologies, especially Power BI. Experience with Azure data services such as Data Factory, Synapse, Purview, and Logic/Function Apps. What Orveon Offers You You are a creator of Orveon's success and your own. This is a rare opportunity to share your voice, accelerate your career, drive innovation, and foster growth. We're a human-sized company so your work will have a big impact on the organization. We invest in the well-being of our Orveoners - both personally and professionally and provide tailored benefits to support all of you, such as: "Hybrid First" Model - 2-3 days per week in office, balancing virtual and face-to-face interactions. "Work From Anywhere" - Freedom to work three (3) weeks annually from the location of your choice. Complimentary Products- Free and discounted products on new releases and fan-favorites. Professional Development- Exposure to senior leadership, learning and development programs, and career advancement opportunities.
Community Engagement- Volunteer opportunities in the communities in which we live and work. Other things to know! Pay Transparency- One of our values is Stark Honesty, and the following represents a good faith estimate of the compensation range for this position. At Orveon Global, we carefully consider a wide range of non-discriminatory factors when determining salary. Actual salaries will vary depending on factors including but not limited to location, education, experience, and qualifications. The pay range for this position is $80,500-$100,500. Supplemented with all the amazing benefits above for full-time employees! Opportunities and Accommodations- Orveon is deeply committed to building a workplace and global community where inclusion is not only valued but prioritized. Find out more on our careers page. BE AWARE OF FRAUD! Please be aware of potentially fraudulent job postings or suspicious recruiter activity by persons posing as Orveon Global Recruiters/HR. Please confirm that the person you are working with has an @orveonglobal.com email address. Additionally, Orveon Global does NOT request financial information or payments from candidates at any point during the hiring process. If you suspect fraudulent activity, please visit the Orveon Global Careers Site at https://www.orveonglobal.com/career to verify the posting and apply through our secure online portal.
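
To make the pipeline responsibilities described in this posting a little more concrete, here is a minimal PySpark sketch that reads raw records, applies a simple quality rule, and writes a curated Parquet table. The file paths, column names, and filter logic are assumptions for illustration only, not Orveon's actual Fabric or Spark setup.

    # Sketch: read raw sales records, drop invalid rows, and write a curated table.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("curate_sales").getOrCreate()

    raw = spark.read.option("header", True).csv("raw/sales/*.csv")  # hypothetical landing path
    curated = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))  # basic quality rule
           .dropDuplicates(["order_id"])
    )
    curated.write.mode("overwrite").parquet("curated/sales")  # hypothetical output path
    spark.stop()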

Posted 30+ days ago

DPR Construction logo

Sr. Data Scientist

DPR ConstructionAtlanta, GA

Job Description

DPR Construction is seeking a skilled Senior Data Scientist to help advance our data-driven approach to building. In this role, you'll use statistical analysis, machine learning, and data visualization to turn complex construction and business data into actionable insights that improve project planning, cost forecasting, resource management, and safety. Working with project and operations teams, you'll build and deploy scalable, secure data solutions on cloud platforms like Azure and AWS, driving innovation and operational excellence across DPR's projects.

Responsibilities

  • Data analysis and modeling: Analyze large datasets to identify trends, bottlenecks, and areas for improvement in operational performance. Build predictive and statistical models to forecast demand, capacity, and potential issues.
  • Develop and deploy models: Build, test, and deploy machine learning and AI models to improve operational processes.
  • Analyze operational data: Examine data related to projects, production, supply chains, inventory, and quality control to identify patterns, trends, and inefficiencies.
  • Optimize processes: Use data-driven insights to streamline workflows, allocate resources more effectively, and improve overall performance.
  • Forecast and predict: Create predictive models to forecast outcomes, such as demand, and inform strategic decisions.
  • Communicate findings: Present findings and recommendations to stakeholders through reports, visualizations, and presentations.
  • Ensure reliability: Build and maintain reliable, scalable, and efficient data science systems and processes.
  • Collaboration: Partner with project managers, engineers, and business leaders to ensure data solutions are aligned with organizational goals and deliver tangible improvements.
  • Continuous Learning: Stay current with advancements in data science and machine learning to continually enhance the company's data capabilities.
  • Reporting and communication: Create dashboards and reports that clearly communicate performance trends and key insights to leadership and other stakeholders. Translate complex data into actionable recommendations.
  • Performance monitoring: Implement data quality checks and monitor the performance of models and automated systems, creating feedback loops for continuous improvement.
  • Experimentation: Design and evaluate experiments to quantify the impact of new systems and changes on operational outcomes.

Qualifications

  • Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Engineering, or a related field.
  • 7+ years of experience in data science roles within AEC, product or technology organizations.
  • At least 4 years of experience working with cloud platforms, specifically Azure and AWS, for model deployment and data management.
  • Strong proficiency in Python or R for data analysis, modeling, and machine learning, with experience in relevant libraries (e.g., Scikit-learn, TensorFlow, PyTorch) and NLP frameworks (e.g., GPT, Hugging Face Transformers).
  • Expertise in SQL for data querying and manipulation, and experience with data visualization tools (e.g., Power BI, Tableau).
  • Solid understanding of statistical methods, predictive modeling, and optimization techniques.
  • Expertise in statistics and causal inference, applied in both experimentation and observational causal inference studies.
  • Proven experience designing and interpreting experiments and making statistically sound recommendations.
  • Strategic and impact-driven mindset, capable of translating complex business problems into actionable frameworks.
  • Ability to build relationships with diverse stakeholders and cultivate strong partnerships.
  • Strong communication skills, including the ability to bridge technical and non-technical stakeholders and collaborate across various functions to ensure business impact.
  • Ability to operate effectively in a fast-moving, ambiguous environment with limited structure.
  • Experience working with construction-related data or similar industries (e.g., engineering, manufacturing) is a plus.

Preferred Skills

  • Familiarity with construction management software (e.g., ACC, Procore, BIM tools) and knowledge of project management methodologies.
  • Hands-on experience with Generative AI tools and libraries.
  • Background in experimentation infrastructure or human-AI interaction systems.
  • Knowledge of time-series analysis, anomaly detection, and risk modeling specific to construction environments.

DPR Construction is a forward-thinking, self-performing general contractor specializing in technically complex and sustainable projects for the advanced technology, life sciences, healthcare, higher education and commercial markets. Founded in 1990, DPR is a great story of entrepreneurial success as a private, employee-owned company that has grown into a multi-billion-dollar family of companies with offices around the world.

Working at DPR, you'll have the chance to try new things, explore unique paths and shape your future. Here, we build opportunity together by harnessing our talents, enabling curiosity and pursuing our collective ambition to make the best ideas happen. We are proud to be recognized as a great place to work by our talented teammates and leading news organizations like U.S. News and World Report, Forbes, Fast Company and Newsweek.

Explore our open opportunities at www.dpr.com/careers.
