Job Description

Required Technical Skill Set: Hadoop, PySpark, Spark SQL, Hive

  • Hands-on experience with Hadoop, PySpark, Spark SQL, Hive, and Hadoop Big Data ecosystem tools
  • Able to develop and tune queries and work on performance enhancement
  • Solid understanding of object-oriented programming and HDFS concepts
  • Responsible for delivering code, setting up the environment and connectivity, and deploying code to production after testing

Good-to-Have

  • Good DWH / Data Lake knowledge
  • Conceptual and creative problem-solving skills, the ability to work with considerable ambiguity, and the ability to learn new and complex concepts quickly
  • Experience working with teams in a complex organization involving multiple reporting lines
  • Good DevOps and Agile development framework knowledge

Responsibility of / Expectations from the Role

  • Work as a developer on Cloudera Hadoop
  • Work on Hadoop, PySpark, Spark SQL, Hive, and Big Data ecosystem tools
  • Bring strong functional and technical knowledge to deliver what is required, with a good grasp of banking terminology
  • Apply strong DevOps and Agile development framework knowledge
  • Create PySpark jobs for data transformation and aggregation
  • Experience with stream-processing systems such as Spark Streaming

Job Application Tips

  • Tailor your resume to highlight relevant experience for this position
  • Write a compelling cover letter that addresses the specific requirements
  • Research the company culture and values before applying
  • Prepare examples of your work that demonstrate your skills
  • Follow up on your application after a reasonable time period
