Job Description

Key Responsibilities:

  • Design, develop, and maintain ETL/ELT pipelines using AWS Glue and Python (see the illustrative sketch after these lists).
  • Build and manage data ingestion workflows through AWS Transfer Family (SFTP, FTP, FTPS).
  • Develop and optimize data models and warehousing solutions on AWS Redshift.
  • Write efficient, complex SQL queries for data transformation, validation, and reporting.
  • Ensure high-quality, clean, and reliable data is delivered to downstream applications.
  • Collaborate with cross-functional teams, including analytics, product, and business stakeholders.
  • Implement data quality checks, performance tuning, and automation within the data ecosystem.
  • Ensure compliance with best practices in data security, scalability, and cloud architecture.

Required Skill Set:

  • Strong hands-on experience with AWS Glue (ETL jobs, crawlers, workflows).
  • Proficiency in Python for data processing and automation.
  • Experience working with AWS Transfer Family for data movement and SFTP solutions.
  • Solid knowledge of AWS Redshift (data modeling, performance tuning, Spectrum, Workload Management).
  • Advanced proficiency in SQL for querying large datasets.
  • Good understanding of data warehousing, ETL concepts, and cloud data architecture.
  • Experience with version control tools (Git) and CI/CD is an advantage.
  • Strong analytical and problem-solving skills with attention to detail.
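To give candidates a concrete sense of the day-to-day work, here is a minimal, illustrative sketch of the kind of AWS Glue ETL job in Python this role involves: read a raw dataset from the Glue Data Catalog, clean and remap its schema, and load it into Redshift. The database, table, column, and connection names (sales_db, raw_orders, redshift-conn, analytics.orders) are hypothetical placeholders, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping, DropNullFields
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve job arguments and initialize contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "TempDir"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw data registered in the Glue Data Catalog (names are hypothetical).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",      # hypothetical catalog database
    table_name="raw_orders",  # hypothetical catalog table
)

# Basic cleanup and schema mapping: drop null-only fields, then cast/rename columns.
orders = DropNullFields.apply(frame=orders)
orders = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_ts", "string", "order_ts", "timestamp"),
        ("amount", "double", "amount", "decimal(12,2)"),
    ],
)

# Load the cleaned data into Redshift through a Glue catalog connection
# (connection and table names are hypothetical).
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=orders,
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "analytics.orders", "database": "dev"},
    redshift_tmp_dir=args["TempDir"],
)

job.commit()
```

In practice, a job like this would be wired into a Glue workflow with crawlers keeping the catalog schema current, which is the ETL-orchestration experience the skill set above calls for.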
