Kalpas Innovations Pvt Ltd

Python Engineer — Backend & Data Aggregations


Job Description

Contract duration: 6 months

Role Overview:

We're looking for a Python Engineer (2–4 years) who's strong in backend development and has hands-on experience implementing aggregation and data computation use cases, such as device-level rollups, metrics computation, time-based summaries, or multi-source joins. You'll work closely with platform, data, and product teams to design efficient aggregation logic and APIs that serve real-time and historical analytics, and you'll help make Condense's data platform more intelligent and scalable.

Key Responsibilities:

  • Design and implement data aggregation logic for device, customer, or time-window-based metrics using Python.
  • Build clean, maintainable backend services or microservices that perform aggregations and expose results through APIs or data sinks.
  • Work with internal teams to translate business or analytics needs into efficient aggregation pipelines.
  • Optimize data handling: caching, indexing, and computation efficiency for large-scale telemetry data.
  • Collaborate with DevOps and data teams to integrate with databases, message queues, and streaming systems.
  • Write high-quality, tested, and observable code, ensuring performance and reliability in production.
  • Contribute to design discussions, reviews, and documentation across backend and data infrastructure components.

Required Qualifications:

  • 2–4 years of professional experience as a Python Developer / Backend Engineer.
  • Strong proficiency with Python (async programming, data structures, I/O, concurrency).
  • Experience with data aggregation, metrics computation, or analytics workflows (batch or incremental).
  • Sound understanding of REST APIs, microservice architecture, and database design (SQL/NoSQL).
  • Familiarity with cloud-native development and containerized deployment (Docker, Kubernetes).
  • Hands-on experience with data access and transformation libraries such as pandas and SQLAlchemy, and with backend frameworks such as FastAPI or Flask.
  • Excellent debugging, profiling, and optimization skills.

Good to Have:

  • Exposure to real-time data pipelines (Kafka, Kinesis, Pulsar, etc.) or streaming frameworks (Kafka Streams, ksqlDB, Faust).
  • Experience with time-series databases or analytics stores (ClickHouse, Timescale, Druid, etc.).
  • Understanding of event-driven or stateful aggregation patterns (tumbling/sliding windows, deduplication).
  • Familiarity with CI/CD, observability tools (Prometheus, Grafana), and monitoring best practices.
  • Experience working in IoT, mobility, or telemetry-heavy product environments.

What Success Looks Like:

  • You deliver robust, scalable aggregation logic that enables downstream analytics and dashboards.
  • Code is clean, performant, and maintainable, following engineering best practices.
  • Aggregation jobs and APIs are well-monitored and observable, enabling smooth production operation.
  • You work effectively across teams (platform, DevOps, and product) to deliver data-backed insights faster.
  • You continuously learn and adopt best practices from the Python and data engineering ecosystem.
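To make the aggregation patterns mentioned above concrete, here is a minimal sketch of a tumbling-window rollup over device telemetry in plain Python. The event shape (`device_id`, `timestamp`, `value`) and the function name are illustrative assumptions, not part of the role's actual codebase:

```python
from collections import defaultdict

def tumbling_window_rollup(events, window_seconds):
    """Aggregate (device_id, timestamp, value) events into fixed,
    non-overlapping time windows, returning per-device count and sum."""
    buckets = defaultdict(lambda: {"count": 0, "total": 0.0})
    for device_id, ts, value in events:
        # Floor the timestamp to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        key = (device_id, window_start)
        buckets[key]["count"] += 1
        buckets[key]["total"] += value
    return dict(buckets)

# Example: three readings from one device, 60-second windows.
events = [
    ("dev-1", 0, 10.0),   # window starting at t=0
    ("dev-1", 30, 20.0),  # same window
    ("dev-1", 65, 5.0),   # next window, starting at t=60
]
rollup = tumbling_window_rollup(events, window_seconds=60)
print(rollup[("dev-1", 0)])   # {'count': 2, 'total': 30.0}
print(rollup[("dev-1", 60)])  # {'count': 1, 'total': 5.0}
```

A sliding window differs only in that each event lands in every window whose span covers its timestamp, so one event updates multiple buckets.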
