Job Description
Key Responsibilities
1. Design and build batch and real-time data warehouses to support overseas e-commerce growth
2. Develop efficient ETL pipelines to optimize data processing performance and ensure data quality and stability
3. Build a unified data middleware layer to reduce business data development costs and improve service reusability
4. Collaborate with business teams to identify core metrics and data requirements, delivering actionable data solutions
5. Surface data insights through collaboration with business owners
6. Participate in AI-driven efficiency initiatives, contributing to machine learning algorithm development, feature engineering, and data processing workflows

Essential Qualifications
1. Proficiency in Korean, plus Chinese or English at a business communication level for international team collaboration
2. 3+ years of data engineering experience with a proven track record in data platform/data warehouse projects
3. Proficiency in the Hadoop ecosystem (Hive, Kafka, Spark, Flink), SQL, and at least one programming language (Python/Java/Scala)
4. Solid understanding of data warehouse modeling (dimensional modeling, star/snowflake schemas) and ETL performance optimization
5. Strong cross-department collaboration skills to translate business requirements into technical solutions
6. Bachelor's degree or higher in Computer Science, Data Science, Statistics, or a related field

Preferred Qualifications
- Experience with AI model development workflows and hands-on experience in machine learning/deep learning projects
Job Application Tips
- Tailor your resume to highlight relevant experience for this position
- Write a compelling cover letter that addresses the specific requirements
- Research the company culture and values before applying
- Prepare examples of your work that demonstrate your skills
- Follow up on your application after a reasonable time period