Job Description

Responsibilities:

  • Design, develop, and maintain data pipelines and ETL processes using Python and YAML-based configurations.
  • Manage and optimize relational and analytical databases, primarily PostgreSQL and ClickHouse (preferred).
  • Deploy and manage containerized applications using Podman, and orchestrate services with Kubernetes (preferred).
  • Apply advanced database practices such as partitioning, indexing, sharding, and housekeeping to ensure performance and scalability.
  • Monitor and ensure data integrity, consistency, and security in compliance with banking data governance standards.
  • Apply knowledge of networking fundamentals, including reverse proxies, SSH tunneling, and secure communication setups.
  • Collaborate with data analysts, developers, and system administrators to deliver reliable and auditable data solutions for banking clients.

Requirements:

  • Minimum D3 degree in computer science.
  • Minimum 1 year of experience in the same field.
  • Strong knowledge of PostgreSQL administration and performance tuning.
  • Proficiency in Python for data manipulation, automation, or ETL development.
  • Strong background working in Linux-based environments.

Experience:

  • Exposure to banking data models, financial transaction systems, or core banking platforms (Temenos T24 preferred).
  • Experience with ClickHouse or other columnar analytical databases.
  • Working experience with Kubernetes for container orchestration.

Job Application Tips

  • Tailor your resume to highlight relevant experience for this position
  • Write a compelling cover letter that addresses the specific requirements
  • Research the company culture and values before applying
  • Prepare examples of your work that demonstrate your skills
  • Follow up on your application after a reasonable time period
