Job Description
Location: Dublin City Centre
Work Arrangement: Hybrid (4 days in-office, 1 day from home)

Role Description:
=================
This position requires a Senior Spark Data Engineer to design, build, and maintain data pipelines and infrastructure. The role involves working within a team using the SCRUM framework and collaborating with various stakeholders on data requirements.

Key Responsibilities:
=====================
- Develop and maintain data pipelines using Spark (PySpark) and Python.
- Utilise AWS services, including AWS Glue, Step Functions, Lambda, IAM, and S3, for data processing and analytics tasks.
- Manage data warehousing solutions, incorporating technologies such as Apache Iceberg.
- Participate in the SCRUM process by estimating and articulating effort for sprint tasks.

Required Experience and Skills:
===============================
- Demonstrable experience as a Senior Data Engineer.
- Deep knowledge of Spark (PySpark).
- Proficiency in Python for data engineering purposes.
- General understanding of AWS services related to data and analytics (e.g., AWS Glue, Step Functions/Lambda, IAM, S3).
- Familiarity with Apache Iceberg.
- Experience working in a SCRUM/Agile environment.
- Ability to estimate task effort and communicate effectively within a sprint structure.
- Strong communication and collaboration skills.

Desirable Experience:
=====================
- A background in the finance industry.
Job Application Tips
- Tailor your resume to highlight experience relevant to this position
- Write a compelling cover letter that addresses the specific requirements
- Research the company culture and values before applying
- Prepare examples of your work that demonstrate your skills
- Follow up on your application after a reasonable period (typically one to two weeks)