Data Engineer

Remote · Full time
Posted Aug 24, 2025

Job Details

Employment Type: Full time
Salary: Not specified
Valid Through: Sep 23, 2025

Job Description

We are seeking a Senior Data Engineer to support the ingestion, processing, and synchronization of data across our analytics platform. This role focuses on using Python Notebooks to ingest data via APIs into Microsoft Fabric's Data Lake and Data Warehouse, with some data being synced to a Synapse Analytics database for broader reporting needs.
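As a rough illustration of the ingestion pattern described above, the sketch below pulls records from a REST API and lands them unmodified in a Bronze Delta table. The endpoint, token, response shape, and lakehouse path are hypothetical placeholders, and it assumes a Fabric notebook where a SparkSession named `spark` is already provided.

```python
# Minimal sketch of API-to-Bronze ingestion in a Fabric notebook.
# Endpoint, token, and lakehouse path are hypothetical placeholders;
# Fabric notebooks expose a preconfigured SparkSession named `spark`.
import json
import requests

resp = requests.get(
    "https://api.example.com/v1/orders",          # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
records = resp.json()["results"]                  # assumed response shape

# Land the raw payload unchanged in the Bronze layer as a Delta table,
# keeping one JSON string per row so nothing is lost before refinement.
df = spark.createDataFrame([(json.dumps(r),) for r in records], ["raw_json"])
(df.write
   .format("delta")
   .mode("append")
   .save("Files/bronze/orders"))                  # hypothetical lakehouse path
```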

The ideal candidate will have hands-on experience working with API-based data ingestion and modern data architectures, including implementing Medallion layer architecture (Bronze, Silver, Gold) for optimal data organization and quality management, with bonus points for exposure to marketing APIs like Google Ads, Google Business Profile, and Google Analytics 4. This is a remote position.

We welcome applicants globally, but this role has a preference for LATAM candidates to ensure smoother collaboration with our existing team.

Key Responsibilities

- Build and maintain Python Notebooks to ingest data from third-party APIs
- Design and implement Medallion layer architecture (Bronze, Silver, Gold) for structured data organization and progressive data refinement
- Store and manage data within Microsoft Fabric's Data Lake and Warehouse using Delta Parquet file formats
- Set up data pipelines and sync key datasets to Azure Synapse Analytics
- Develop PySpark-based data transformation processes across the Bronze, Silver, and Gold layers (see the sketch after this list)
- Collaborate with developers, analysts, and stakeholders to ensure data availability and accuracy
- Monitor, test, and optimize data flows for reliability and performance
- Document processes and contribute to best practices for data ingestion and transformation
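To make the Medallion responsibilities concrete, here is a minimal PySpark sketch of Bronze-to-Silver-to-Gold refinement, continuing from the ingestion example above. The table paths, column names, and schema are illustrative assumptions, not details from this posting.

```python
# Hedged sketch of Bronze -> Silver -> Gold refinement with PySpark.
# Paths, columns, and schema are illustrative; assumes a Fabric
# notebook with a preconfigured SparkSession named `spark`.
from pyspark.sql import functions as F

bronze = spark.read.format("delta").load("Files/bronze/orders")

# Silver: parse the raw JSON, apply types, and deduplicate.
silver = (
    bronze
    .select(F.from_json("raw_json",
                        "order_id STRING, amount DOUBLE, ts STRING").alias("o"))
    .select("o.*")
    .withColumn("ts", F.to_timestamp("ts"))
    .dropDuplicates(["order_id"])
)
silver.write.format("delta").mode("overwrite").save("Files/silver/orders")

# Gold: aggregate to a reporting-ready shape for downstream consumers.
gold = silver.groupBy(F.to_date("ts").alias("order_date")).agg(
    F.sum("amount").alias("daily_revenue"),
    F.count("*").alias("order_count"),
)
gold.write.format("delta").mode("overwrite").save("Files/gold/daily_orders")
```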

Tech Stack You'll Use

Ingestion & Processing: Python (Notebooks), PySpark
Storage & Warehousing: Microsoft Fabric Data Lake & Data Warehouse, Delta Parquet files
Sync & Reporting: Azure Synapse Analytics
Cloud & Tooling: Azure Data Factory, Azure DevOps

Requirements

- Strong experience with Python for data ingestion and transformation
- Proficiency with PySpark for large-scale data processing
- Proficiency in working with RESTful APIs and handling large datasets (a pagination sketch follows this list)
- Experience with Microsoft Fabric or similar modern data platforms
- Understanding of Medallion architecture (Bronze, Silver, Gold layers) and data lakehouse concepts
- Experience working with Delta Lake and Parquet file formats
- Understanding of data warehousing concepts and performance tuning
- Familiarity with cloud-based workflows, especially within the Azure ecosystem
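One common way to handle large datasets over RESTful APIs is cursor-based pagination with a generator, sketched below. The endpoint, the `cursor`/`next_cursor` fields, and the `fetch_all` helper are hypothetical assumptions, not a specific API.

```python
# Illustrative pattern for paging through a large REST API result set;
# the endpoint shape and pagination fields are assumptions.
import requests

def fetch_all(base_url: str, token: str, page_size: int = 500):
    """Yield records one page at a time to keep memory use flat."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    cursor = None
    while True:
        params = {"limit": page_size}
        if cursor:
            params["cursor"] = cursor
        resp = session.get(base_url, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        yield from payload["results"]             # assumed response shape
        cursor = payload.get("next_cursor")       # assumed pagination field
        if not cursor:
            break
```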

Nice to Have

- Experience with marketing APIs such as Google Ads or Google Analytics 4
- Familiarity with Azure Synapse and Data Factory pipeline design
- Understanding of data modeling for analytics and reporting use cases
- Experience with AI coding tools
- Experience with Fivetran, Airbyte, and Rivery

