
Big Data Engineer

iSoftTek Solutions Inc, Phoenix, AZ

Job Description

Job Title: Big Data Engineer

Job Location: Phoenix, AZ

Job Type: Contract

iSoftTek Solutions Inc is seeking a talented Big Data Engineer to join our team. In this role, you will be responsible for developing and maintaining big data solutions and frameworks. You will work closely with cross-functional teams to design, implement, and optimize data processing and analytics systems utilizing Hadoop, Spark, and other big data technologies.

Responsibilities:

  • Design, develop, and implement big data solutions using technologies such as Hadoop and Spark
  • Develop scalable data pipelines for data ingestion, transformation, and analysis
  • Collaborate with data scientists and analysts to understand business requirements and design efficient data models and architectures
  • Optimize and tune big data applications for performance and scalability
  • Monitor and troubleshoot issues in big data systems and provide timely resolution
  • Ensure data quality and integrity in all stages of data processing
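
The pipeline responsibilities above (ingestion, transformation, and data-quality checks) can be sketched as a toy single-node step in plain Python; in production this work would typically run on Spark/PySpark. All names and the sample data here are illustrative, not from the posting:

```python
# Toy ingest -> transform -> aggregate pipeline step.
# Illustrative only: function names and sample data are hypothetical.
import csv
import io

RAW = """id,amount
1,10.5
2,not_a_number
3,7.25
"""

def ingest(text):
    """Parse raw CSV text into row dicts (ingestion)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast fields to typed values, dropping malformed rows (transformation)."""
    clean = []
    for row in rows:
        try:
            clean.append({"id": int(row["id"]), "amount": float(row["amount"])})
        except ValueError:
            continue  # data-quality gate: skip rows that fail type checks
    return clean

def summarize(rows):
    """Aggregate the cleaned rows (analysis)."""
    return sum(r["amount"] for r in rows)

records = transform(ingest(RAW))
print(len(records), summarize(records))  # 2 17.75
```

The same ingest/transform/aggregate shape maps directly onto Spark: `ingest` becomes a `spark.read` call, `transform` a set of typed column expressions with filters, and `summarize` a DataFrame aggregation.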

Requirements:

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field
  • 3+ years of experience as a Big Data Engineer
  • Strong knowledge and hands-on experience with big data technologies such as Hadoop, Spark, Hive, or Kafka
  • Proficiency in programming languages such as Java, Scala, or Python
  • Experience with data modeling and data warehousing concepts
  • Knowledge of SQL and NoSQL databases
  • Excellent problem-solving and analytical skills
  • Strong communication and collaboration skills
  • Ability to work independently and in a team environment

Must Have Skills:

  • Proficiency in Java or Python
  • Experience with Spark or PySpark
  • Strong understanding of SQL and writing complex queries
  • Experience with shell or Unix scripting
  • Working experience with Hive
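
As a small illustration of the SQL skills listed above, here is an aggregate query run against an in-memory SQLite table; Hive and Spark SQL accept essentially the same `GROUP BY` syntax. Table and column names are made up for the example:

```python
# Toy SQL illustration: per-user aggregation over an in-memory SQLite table.
# Schema and data are hypothetical, chosen only to demonstrate the query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

# GROUP BY aggregation: total amount per user, largest total first
query = """
    SELECT user_id, SUM(amount) AS total
    FROM events
    GROUP BY user_id
    ORDER BY total DESC
"""
print(conn.execute(query).fetchall())  # [(1, 15.0), (2, 7.5)]
```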