Project Foundry

Senior Data Engineer (Microsoft Fabric) - Remote

Posted: 3 days ago

Job Description

Purpose

Own the build of our Microsoft Fabric data platform: ingesting, modelling, orchestrating, and optimising data products that power enterprise reporting, analytics, and self-service BI.

What you'll do

  • Design and implement robust ingestion (batch/stream/CDC) into OneLake/Lakehouse using reusable, metadata-driven patterns.
  • Build medallion (bronze-silver-gold) pipelines with Fabric (Data Factory/Dataflows Gen2, Notebooks, Spark/PySpark, T-SQL).
  • Implement Direct Lake semantic models for Power BI; tune for performance and cost.
  • Productionise pipelines with CI/CD (Azure DevOps or GitHub), environments, approvals, and automated tests.
  • Add lineage, classification, and access policies via Microsoft Purview and enforce governance by default.
  • Instrument platforms with monitoring/alerting, retry/idempotency, and data quality checks.
  • Profile data, improve models, and resolve bottlenecks with engineering fixes (partitioning, caching, file formats, indexing).
  • Partner with analysts and architects to turn requirements into shippable data products and clear documentation.
  • Drive continuous improvement: templates, libraries, and runbooks that reduce lead time and toil.

Must-have experience

  • 5+ years in data engineering with recent, hands-on Microsoft Fabric (or Synapse plus a staged Fabric migration).
  • Strong PySpark/Python and T-SQL for transformations, orchestration, and optimisation.
  • Proven CI/CD for data (git, branching, PRs, pipelines; IaC basics such as Bicep/Terraform/ARM helpful).
  • Built and supported production medallion/lakehouse architectures at enterprise scale.
  • Practical cost and performance tuning (capacities, partitions/distribution, file sizes, Direct Lake model shaping).
  • Governance and lineage in practice (Purview, role-based security, PII handling, auditability).
  • Monitoring/alerting with metrics and logs; designing for reliability (idempotency, backfills, SLAs/SLOs).

Nice to have

  • Azure services around Fabric (Event Hubs, Data Explorer, Functions, Key Vault).
  • Data modelling depth (star/snowflake, Data Vault); DAX familiarity for modellers.
  • Test frameworks for data (e.g., dbt tests, Great Expectations, pytest).
  • Certifications (DP-203, DP-600, AZ-305) and experience migrating Synapse → Fabric.
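
To give candidates a concrete feel for the work, here is a minimal sketch of the kind of bronze-to-silver medallion step described above. It assumes a Fabric Lakehouse with Delta tables and PySpark; the table names, columns, and quality rules are hypothetical illustrations, not part of any actual codebase for this role.

    from pyspark.sql import SparkSession, functions as F
    from delta.tables import DeltaTable

    # A Fabric notebook provides a configured `spark` session; this line
    # matters only when running outside that environment.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical bronze table landed by a metadata-driven ingestion job.
    bronze = spark.read.table("bronze.orders")

    # Standardise types, drop duplicates, and apply a basic quality gate.
    silver_batch = (
        bronze
        .dropDuplicates(["order_id"])
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
    )

    # Idempotent upsert into silver: re-running the same batch yields the
    # same result, which keeps retries and backfills safe.
    (
        DeltaTable.forName(spark, "silver.orders").alias("t")
        .merge(silver_batch.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

The merge-on-key pattern is what makes the step idempotent, one of the reliability expectations listed under must-have experience.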

