Job Description

Alphabridge is a dynamic tech company focused on empowering startups and mid-sized businesses with innovative solutions that drive growth and scalability. We specialise in providing cutting-edge software, strategic consulting, and technology infrastructure designed to streamline operations, enhance productivity, and foster sustainable expansion. With a commitment to delivering tailored solutions, Alphabridge helps businesses optimise their processes and succeed in a competitive digital landscape.

Job Title: Data Engineer
Location: DHA Phase 5, Lahore
Timings: 6:00 PM – 3:00 AM
Experience Required: 3+ years

We are hiring a skilled Data Engineer to join our growing team. If you have a passion for building scalable data pipelines, transforming raw data into meaningful insights, and working with cutting-edge cloud and big data technologies, this role is for you.

Core Responsibilities

  • Design, build, and maintain scalable, cost-efficient data pipelines using Fivetran, Python, BigQuery, Microsoft Fabric, and Ab Initio.
  • Integrate, enrich, and transform raw data from diverse sources into reliable, analytics-ready datasets.
  • Develop, optimize, and monitor high-performance data workflows and cloud-native architectures.
  • Create and manage Looker Views/Models and Fabric data experiences to support self-service analytics.
  • Collaborate with cross-functional teams (Product, QA, Engineering, Data) in an Agile environment to enhance data-driven features and capabilities.
  • Work with large-scale data processing frameworks such as Apache Spark, Beam, or similar.
  • Implement and support data streaming architectures (Pub/Sub, Kafka).

Required Skills & Experience

  • Bachelor's degree in Computer Science, Information Systems, or equivalent practical experience.
  • Hands-on Ab Initio development experience.
  • Strong experience with healthcare data (claims, enrollment, provider data) and compliance standards (HIPAA, HITECH).
  • Proficiency with SQL, including complex queries, window functions, and nested subqueries.
  • Strong experience with relational and NoSQL databases.
  • Expertise in Unix/Linux shell scripting and job scheduling tools (Control-M, Autosys).
  • Hands-on experience with cloud platforms (AWS, Azure, GCP).
  • Experience with data warehousing and data lake environments, ideally BigQuery.

Preferred Skills

  • Experience with other ETL tools (Informatica, Talend, DataStage).
  • Knowledge of modern data technologies (Hadoop, Spark, data lakes).
  • Familiarity with DevOps and CI/CD tools (Jenkins, Git, Terraform).
  • Experience with Ab Initio cloud services.
  • Working knowledge of BI platforms such as Looker.
  • Strong analytical mindset with a focus on data accuracy, quality, and scalable design.
  • Passion for learning emerging big data technologies and modern cloud architectures.
