Saturday, October 25, 2025
Sparq

Senior Databricks Engineer

Posted: 1 hour ago

Job Description

At Sparq, we help companies solve the right problems, not just build more technology. We're a modern product engineering partner blending strategy, craftsmanship, and speed to help organizations modernize confidently in the age of AI. From data ecosystems to digital products and AI acceleration, we turn complexity into clarity and ideas into impact. If you're driven to build what's next, lead with empathy, and deliver excellence without ego, you'll feel right at home at Sparq.

Why You Will Enjoy Mondays Again

  • Opportunity to collaborate with a diverse group of colleagues in a fun, creative environment
  • Progressive career journey and opportunity for advancement
  • Continuous development through training, mentorship, and certification programs
  • Exposure to modern technologies across various industries in an agile environment
  • Remote work

A Day In The Life

  • Design, develop, and optimize scalable data pipelines and ETL processes using Databricks, PySpark, and related big data technologies.
  • Partner with data architects and analysts to model datasets that support analytics, machine learning, and operational reporting.
  • Implement and maintain data lakehouse architectures, ensuring consistency, reliability, and performance across environments.
  • Collaborate cross-functionally with cloud infrastructure teams to integrate Databricks with Azure or AWS services (e.g., S3, ADLS, Delta Lake).
  • Automate data ingestion, transformation, and quality checks using Databricks Workflows and Delta Live Tables.
  • Monitor and tune job performance, optimizing cluster configurations and managing cost efficiency.
  • Develop and maintain CI/CD pipelines for Databricks code deployments using Git, Azure DevOps, or Jenkins.
  • Troubleshoot production issues, ensuring robust error handling and system resilience.
  • Contribute to evolving best practices for data governance, security, and compliance across the organization.
  • Mentor junior engineers through code reviews, design discussions, and technical knowledge sharing.

What It Takes

  • Strong proficiency in PySpark, SQL, and distributed data processing frameworks.
  • Hands-on experience building and maintaining ETL/ELT pipelines at scale.
  • Deep understanding of Delta Lake, data partitioning, and performance tuning within Databricks.
  • Experience deploying and managing Databricks in Azure or AWS cloud ecosystems.
  • Solid grasp of data modeling, data warehousing, and data lakehouse design patterns.
  • Working knowledge of CI/CD, version control, and modern DevOps practices for data platforms.
  • Familiarity with tools like dbt, Airflow, or Data Factory for orchestration and pipeline management.
  • Strong analytical mindset with a focus on performance, scalability, and reliability.
  • Excellent communication skills and ability to collaborate in fast-paced, cross-functional environments.

Equal Employment Opportunity Policy: Sparq is proud to offer equal employment opportunity without regard to age, color, disability, gender, gender identity, genetic information, marital status, military status, national origin, race, religion, sexual orientation, veteran status, or any other legally protected characteristic.

Job Application Tips

  • Tailor your resume to highlight relevant experience for this position
  • Write a compelling cover letter that addresses the specific requirements
  • Research the company culture and values before applying
  • Prepare examples of your work that demonstrate your skills
  • Follow up on your application after a reasonable time period