IBM

Data Engineer

Posted: 2 hours ago


Job Description

Introduction

In this role, you will work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we bring deep technical and industry expertise to public and private sector clients around the world. You will be part of a team that delivers high-impact solutions and drives adoption of modern data and cloud technologies.

Your Role And Responsibilities

The successful candidate will design, build, and maintain scalable and reliable data pipelines and platforms used across analytics, AI, and business systems. You will collaborate with cross-functional teams to ensure data is accessible, high-quality, and aligned with business goals. Responsibilities include:

  • Designing, developing, and optimizing data processing systems, including ETL/ELT pipelines and data orchestration workflows
  • Building and maintaining data pipelines for batch and real-time (streaming) use cases
  • Working with data scientists, software engineers, and business stakeholders to deliver high-quality, production-ready data solutions
  • Implementing and enforcing data quality, validation, and governance practices
  • Ensuring compliance with data security standards, access controls, and regulatory requirements
  • Monitoring data platform performance and implementing improvements to ensure reliability, scalability, and cost effectiveness
  • Contributing to standardization, automation, and best practices across data engineering teams

Preferred Education

Bachelor's Degree

Required Technical And Professional Expertise

  • Strong Python skills for data processing, pipeline development, and automation
  • Hands-on experience with Apache Spark / PySpark for large-scale distributed data processing
  • Experience with Databricks and cloud platforms (AWS or Azure), including Delta Lake and related data management tools
  • Proven experience designing, developing, and maintaining scalable ETL/ELT pipelines and data platform components
  • Familiarity with building both batch and real-time (streaming) data workflows, preferably with technologies like Kafka, Event Hubs, or Kinesis
  • Experience with DevOps practices and Infrastructure as Code (Terraform preferred)
  • Understanding of data modeling, data warehousing concepts, and modern data architectures (e.g., Lakehouse)

Preferred Technical And Professional Experience

  • Experience building or integrating with LLM-powered or AI-driven solutions
  • Familiarity with FastAPI and Pydantic for service development
  • Certification in AWS, Azure, or Databricks
  • Knowledge of CI/CD pipelines for data workloads

Compensation

The monthly salary for this position ranges from 3,800 EUR gross to 5,800 EUR gross. The final offer will depend on qualifications, professional experience, and competencies.
