Wednesday, October 29, 2025
Botsford Associates

Python Developer

Posted: 13 hours ago

Job Description

Project Overview

Botsford Associates is supporting the Chief Data Office of a leading financial institution on a strategic initiative to develop a centralized information management platform. This platform will enhance the bank's ability to "Know Your Data, Govern Your Data, and Use Your Data" by establishing a common business language, clarifying ownership and accountability, and enabling greater transparency and trust in enterprise data.

Role Summary

We are seeking a Senior Data Engineer to design and implement scalable, reusable data pipelines and foundational frameworks that will power this enterprise-wide transformation. This individual will work with business and technology stakeholders to ensure clean, trusted, and ready-to-use data is ingested and maintained within the new platform, while adhering to data governance standards and technical best practices.

Key Responsibilities

  • Develop scalable ETL pipelines in Python to support ingestion, transformation, standardization, and delivery of clean data to the centralized platform.
  • Integrate data from multiple sources, including S3-based storage systems and external/internal REST APIs.
  • Build reusable components for data validation, control logging, exception handling, and quality checks.
  • Design service layers to enable seamless data consumption by downstream analytics and business systems.
  • Collaborate with data stewards and business analysts to understand critical data elements and ensure proper lineage and metadata integration.
  • Optimize data structures (e.g., Parquet) and pipeline performance across large-scale datasets.
  • Contribute to the onboarding and configuration of enterprise data tooling (e.g., data catalogs, lineage tracking, data quality monitoring).
  • Ensure compliance with enterprise architecture and data governance policies throughout the development lifecycle.

Required Skills & Experience

  • Expert-level experience in core Python development, particularly for back-end and data engineering use cases.
  • Proficiency with data ingestion and integration, including extracting data from S3 storage systems and REST APIs.
  • Solid experience in ETL pipeline design for data transformation, harmonization, and standardization.
  • Background in building service layers/APIs to expose data to consumer applications.
  • Familiarity with the Parquet format and optimizing it for performance in large-scale data environments.
  • Knowledge of metadata integration, data lineage, and governance frameworks is a strong asset.
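The responsibilities above mention building reusable components for data validation, control logging, and exception handling. As a point of reference for candidates, a minimal stdlib-only sketch of such a component might look like the following (all names and field rules are hypothetical, not taken from the actual platform):

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline.validation")  # hypothetical logger name


@dataclass
class ValidationResult:
    """Holds the outcome of one validation pass: records that passed
    and records that were rejected, with the reason for each rejection."""
    clean: list = field(default_factory=list)
    rejected: list = field(default_factory=list)


def validate_records(records, required_fields):
    """Split records into clean and rejected sets.

    A record is rejected if any required field is missing or empty;
    every rejection is logged so the control log captures the reason.
    """
    result = ValidationResult()
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            log.warning("record %d rejected: missing %s", i, missing)
            result.rejected.append({"record": rec, "missing": missing})
        else:
            result.clean.append(rec)
    return result
```

In a real pipeline the rejected records would typically be persisted to an exception store for remediation rather than only logged.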
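The posting also calls for transformation, harmonization, and standardization of data arriving from S3 extracts and REST APIs. A toy sketch of field-name harmonization toward an assumed canonical schema (the mapping and source names below are illustrative, not from the actual platform):

```python
# Hypothetical per-source mappings from source field names to a
# canonical schema ("common business language").
SOURCE_FIELD_MAP = {
    "s3_extract": {"cust_id": "customer_id", "nm": "customer_name"},
    "rest_api": {"customerId": "customer_id", "customerName": "customer_name"},
}


def harmonize(record: dict, source: str) -> dict:
    """Rename source-specific fields to the canonical schema.

    Fields with no entry in the mapping are passed through unchanged,
    so records that already use canonical names are unaffected.
    """
    mapping = SOURCE_FIELD_MAP.get(source, {})
    return {mapping.get(k, k): v for k, v in record.items()}
```

Usage: `harmonize({"customerId": 7}, "rest_api")` yields `{"customer_id": 7}`, so downstream consumers see one schema regardless of origin.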

Job Application Tips

  • Tailor your resume to highlight relevant experience for this position
  • Write a compelling cover letter that addresses the specific requirements
  • Research the company culture and values before applying
  • Prepare examples of your work that demonstrate your skills
  • Follow up on your application after a reasonable time period
