Wednesday, October 29, 2025

Job Description

About the Role

We are looking for a passionate Data Engineer to join our growing data team to transform raw data into meaningful and reliable data products. In this role, you will play a key part in modernizing traditional ETL processes and architecting next-generation data platforms using big data technologies such as Spark and Flink.

Responsibilities

  • Design, develop, and manage end-to-end, efficient, and scalable data pipelines using big data processing technologies like Spark and Flink, as well as industry-standard ETL tools
  • Create flexible data models and data warehousing solutions to support analytics and reporting processes that meet business needs
  • Analyze existing ETL/ELT processes and SQL queries to implement improvements, optimize resource consumption, and enhance performance
  • Collaborate closely with data scientists, analysts, and business units to understand data requirements and deliver high-quality data products
  • Ensure the implementation of data governance and security standards, including data quality, data lineage, and reliability
  • Stay current with new technologies and approaches in the data field and proactively recommend improvements to the existing infrastructure

Required Qualifications

  • Bachelor's degree in Computer Science, Management Information Systems (MIS), Mathematics, or a related field
  • 3+ years of hands-on experience in Data Engineering or Data Warehousing
  • Proven experience developing large-scale data pipelines and ETL/ELT workflows using Python and Spark
  • Hands-on experience with workflow scheduling platforms such as Airflow, Dagster, or similar technologies
  • Advanced proficiency in SQL and experience with procedural SQL languages such as Oracle PL/SQL
  • Experience working with structured and semi-structured data formats like Parquet, Avro, and JSON
  • In-depth knowledge of modern data architectures such as data lakes, data lakehouses, and core data modeling principles
  • Experience with at least one industry-standard ETL tool such as ODI, Informatica, or Talend

Preferred Qualifications (Nice to Have)

  • Experience with data processing and optimization in cloud environments (AWS, Azure, GCP), with a preference for GCP
  • Experience with real-time data processing and streaming technologies such as Kafka
  • Familiarity with workflow management platforms like Airflow
  • Knowledge of containerization technologies like Docker and Kubernetes, and CI/CD processes
  • Familiarity with BI tools such as Power BI
  • Experience in the banking or finance industry is a significant advantage

Personal Attributes

  • Excellent communication skills in English, both written and verbal
  • Strong analytical thinking, problem-solving skills, and a results-oriented mindset
  • A team player with the ability to communicate effectively with stakeholders at all technical levels
  • Detail-oriented with a commitment to delivering high-quality work

Follow us

  • LinkedIn: https://www.linkedin.com/company/innovance-consultancy
  • LinkedIn: https://www.linkedin.com/company/dataspecta
  • Instagram: https://www.instagram.com/innovanceconsultancy
  • Instagram: https://www.instagram.com/dataspecta

Job Application Tips

  • Tailor your resume to highlight relevant experience for this position
  • Write a compelling cover letter that addresses the specific requirements
  • Research the company culture and values before applying
  • Prepare examples of your work that demonstrate your skills
  • Follow up on your application after a reasonable time period
