
Staff Data Engineer, Big Data Engineering

Job Description
The hands-on Staff Data Engineer is responsible for designing and architecting highly scalable data storage solutions, platforms, and tools for vehicle, manufacturing, sales, finance, and other data systems. If you enjoy architecting large-scale systems, working with a talented team of engineers, and collaborating with some of the brightest minds in the automotive industry, Lucid is the place to experience it.
The Role:
• Data Platform Leadership: Lead the architecture, design, and development of big data streaming systems and tools, guided by business priorities and cost optimization.
• Technology Utilization: Leverage cutting-edge distributed big data technologies such as Docker, Kubernetes, Spark, MQTT, gRPC, rosbag, Kafka, S3 Data Lake, and video/audio processing.
• Team Leadership: Act as a hands-on technical leader, guiding the team in making key architectural decisions, unblocking challenges, and ensuring successful execution.
• Application Architecture: Collaborate on application architecture, applying a deep understanding of cloud data applications to close the gap between the infrastructure and data teams.
• Strategic Vision for Data Integration: Drive the technical and strategic vision for our data platform and work closely with the infrastructure team to meet current and future scalability and interoperability needs.
You Bring:
• B.S. or M.S. in Computer Science or a related field.
• 10+ years of hands-on industry experience in data architecture or software engineering.
• 3+ years of experience architecting solutions and platform deployment for big data environments.
• Deep knowledge of modern data/compute infrastructure and frameworks, such as Docker, Kubernetes, Spark, Kafka, S3 Data Lake, Parquet, rosbag, MQTT, and gRPC.
• Proven experience in architecting streaming, real-time, and near real-time data pipelines using tools like Spark or Kafka.
• Working knowledge of data lake architectures with Parquet and HCatalog.
• Ability to cultivate a strong sense of ownership and build a culture of accountability within the team.
• Openness to new ideas and the ability to evaluate multiple approaches, selecting the best based on fundamental qualities and supporting data.
• Exceptional communication skills to articulate highly technical problems and solutions to diverse audiences, from engineers to executive leadership.
• Proven track record of setting technical vision and delivering results in a collaborative, cross-functional environment.
By submitting your application, you understand and agree that your personal data will be processed in accordance with our Candidate Privacy Notice. If you are a California resident, please refer to our California Candidate Privacy Notice.