Zurich Insurance

Data Engineer - Zurich Asuransi Indonesia

Posted: 13 hours ago

Job Description

Job Summary

Responsible for the identification, assessment, and design of data engineering solutions, infrastructure, and systems that support data-driven decision-making and analysis. Data engineers enable the organization to manage, process, and analyze data effectively and efficiently, helping to unlock valuable insights from data.

Key Requirements

  • 3+ years' experience with Spark SQL, Python, and PySpark for data engineering workflows
  • Strong proficiency in dimensional modeling and star schema design for analytical workloads
  • Experience implementing automated testing and CI/CD pipelines for data workflows
  • Familiarity with GitHub operations and collaborative development practices
  • Demonstrated ability to optimize engineering workflow jobs for performance and cost efficiency
  • Experience with cloud data services and infrastructure (AWS, Azure, or GCP)
  • Proficiency with IDE tools such as Visual Studio Code for efficient development
  • Experience with the Databricks platform is a plus

Key Accountabilities

Designs, develops, and validates data processes. Develops data pipelines and supports their implementation, ensuring data solutions align with business objectives while looking ahead to understand future technology options for the business. Serves as a technical expert in a specific process or product area, conducting process reviews and initiating change to contribute to the continuous improvement, efficiency, and quality of services to internal customers.
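To make the "automated testing and CI/CD pipelines for data workflows" requirement concrete, here is a minimal, hypothetical sketch in plain Python (no Spark cluster needed); in practice the same rule would be applied to a PySpark DataFrame, and the test would run in CI on every commit. All names (`normalize_policy_record`, the field names) are illustrative, not part of the posting.

```python
# Hypothetical sketch: a unit-testable record-cleaning rule, kept in plain
# Python so the logic is visible without Spark. In a real pipeline this
# transformation would be expressed over a PySpark DataFrame instead.

def normalize_policy_record(record: dict) -> dict:
    """Trim strings, upper-case the policy ID, default missing premiums to 0."""
    return {
        "policy_id": record.get("policy_id", "").strip().upper(),
        "holder": record.get("holder", "").strip(),
        "premium": float(record.get("premium") or 0.0),
    }

def test_normalize_policy_record():
    # The kind of unit test a CI pipeline (e.g. GitHub Actions) would run.
    raw = {"policy_id": " zx-001 ", "holder": "A. Putri", "premium": None}
    out = normalize_policy_record(raw)
    assert out["policy_id"] == "ZX-001"
    assert out["premium"] == 0.0
```

Keeping transformation rules in small pure functions like this is one common way to make data pipelines unit-testable before they are wired into Spark jobs.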
Researches external primary data sources, selects relevant information, continually evaluates key themes in technology, and makes recommendations to inform policy and/or product development in own area of IT.

Key Responsibilities

  • Design and implement ETL/ELT pipelines using Spark SQL and Python within the Databricks Medallion architecture
  • Develop dimensional data models following star schema methodology, with proper fact and dimension table design, SCD implementation, and optimization for analytical workloads
  • Optimize Spark SQL and DataFrame operations through appropriate partitioning strategies, clustering, and join optimizations to maximize performance and minimize cost
  • Build comprehensive data quality frameworks with automated validation checks, statistical profiling, exception handling, and data reconciliation processes
  • Establish CI/CD pipelines incorporating version control and automated testing, including but not limited to unit, integration, and smoke tests
  • Implement data governance standards, including row-level and column-level security policies for access controls and compliance requirements
  • Create and maintain technical documentation, including ERDs, schema specifications, data lineage diagrams, and metadata repositories

Why Zurich

At Zurich, we like to think outside the box and challenge the status quo. We take an optimistic approach by focusing on the positives and constantly asking, "What can go right?" We are an equal opportunity employer who knows that each employee is unique - that's what makes our team so great! Join us as we constantly explore new ways to protect our customers and the planet.

Location(s): ID - Head Office - MT Haryono
Remote working: Hybrid
Schedule: Full Time
Recruiter name: Ayu Candra Sekar Rurisa
Closing date:
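The "SCD implementation" called out in the responsibilities usually means a Type 2 slowly changing dimension (expire the old row, append a new versioned row). As a hedged illustration of that logic only, here is a plain-Python sketch; on Databricks this would normally be a single Delta Lake MERGE INTO statement rather than a loop, and the table and column names below are invented for the example.

```python
from datetime import date

# Illustrative sketch of SCD Type 2 logic for a customer dimension.
# Assumed columns (hypothetical): customer_id (business key), city
# (tracked attribute), start_date/end_date (validity), is_current flag.

def apply_scd2(dimension: list[dict], incoming: dict, today: date) -> list[dict]:
    """Expire the current row for a changed business key and append a new version."""
    key = incoming["customer_id"]
    for row in dimension:
        if row["customer_id"] == key and row["is_current"]:
            if row["city"] == incoming["city"]:
                return dimension          # attribute unchanged: no new version
            row["is_current"] = False     # expire the old version
            row["end_date"] = today
    dimension.append({                    # insert the new current version
        "customer_id": key,
        "city": incoming["city"],
        "start_date": today,
        "end_date": None,
        "is_current": True,
    })
    return dimension
```

The point of the pattern is that fact tables can join on the surrogate validity window (`start_date`/`end_date`) to see the attribute as it was at transaction time, while `is_current` gives cheap access to the latest version.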

Job Application Tips

  • Tailor your resume to highlight relevant experience for this position
  • Write a compelling cover letter that addresses the specific requirements
  • Research the company culture and values before applying
  • Prepare examples of your work that demonstrate your skills
  • Follow up on your application after a reasonable time period
