Sunday, October 26, 2025

Job Description

Job Title: Senior Big Data SME (Subject Matter Expert)
Location: Remote

About Us

We are a dynamic team focused on building scalable, high-performance data platforms across the cloud ecosystem (Azure, AWS, GCP). As part of an exciting growth phase, we are looking for a Senior Big Data SME to join our team. This role will allow you to leverage your expertise in cloud technologies, big data, and data engineering to drive innovation and excellence in data architecture and solutions.

Job Description

We are seeking a Senior Big Data Subject Matter Expert (SME) to help us architect, build, and optimize large-scale data platforms using leading cloud technologies and Big Data tools. As an SME, you will bring deep technical expertise and strategic vision to our cloud data initiatives, guiding the team on best practices, ensuring the successful delivery of data projects, and improving operational efficiency. You will work closely with engineers, architects, and leadership to implement cutting-edge solutions.

Key Responsibilities

  • Architect and Design Scalable Data Platforms: Leverage your expertise in Azure, AWS, and GCP to design and build scalable, high-performance data platforms for analytics and real-time processing.
  • Cloud Technologies Expertise: Drive the adoption of cloud-native tools and services like Azure Synapse, AWS Glue, Redshift, and Google BigQuery, and ensure alignment with industry best practices.
  • Big Data & Streaming Solutions: Provide hands-on support and guidance on big data technologies including Apache Spark (PySpark, Scala, Java), Hadoop (HDFS, YARN, Hive, Impala), and streaming frameworks like Kafka, Kinesis, and Flink (a streaming sketch follows this list).
  • Data Pipelines & ETL Frameworks: Lead efforts to create and optimize ETL pipelines using tools like DBT, Apache Airflow, and cloud-native data orchestration services (e.g., AWS Glue, Azure Data Factory); see the DAG sketch after this list.
  • Containerization & Orchestration: Containerize applications with Docker and orchestrate them with Kubernetes (AKS, EKS, GKE) for scalable, reliable deployment.
  • Data Security & Governance: Ensure data security, governance, and compliance using IAM, RBAC, and data lineage tracking tools (Azure Purview, AWS Glue Data Catalog).
  • CI/CD & DevOps: Contribute to CI/CD pipelines with Terraform, Jenkins, Azure DevOps, and GitHub Actions to automate the deployment of cloud infrastructure and data workflows.
  • Cost Optimization: Implement strategies for optimizing cloud resource utilization and reducing operational costs, particularly for storage (AWS S3, Azure Blob Storage, GCS) and compute.
  • Team Collaboration & Mentorship: Mentor junior engineers and lead initiatives to build and maintain high-performing teams through knowledge-sharing sessions and best-practice adoption.
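To give candidates a concrete flavor of the streaming work above, here is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and writes windowed aggregates to cloud storage. The broker address, topic, schema, and paths are hypothetical placeholders (not details of our platform), and it assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

# Hypothetical event schema; a real platform would likely source this from a schema registry.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

spark = SparkSession.builder.appName("kafka-events-demo").getOrCreate()

# Read a stream of JSON events from Kafka (broker and topic are placeholders).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Windowed aggregation with a watermark to bound state for late-arriving data.
agg = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "user_id")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write results to cloud storage (path is a placeholder; s3a/abfss/gs URIs all fit here).
query = (
    agg.writeStream.outputMode("append")
    .format("parquet")
    .option("path", "s3a://example-bucket/aggregates/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/aggregates/")
    .start()
)
query.awaitTermination()
```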
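The orchestration work is similar in spirit: ETL steps expressed as a DAG. Below is a minimal Apache Airflow sketch using the TaskFlow API (Airflow 2.4+ assumed); the DAG name and task bodies are illustrative stand-ins, not an actual pipeline from our stack.

```python
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False, tags=["etl-demo"])
def daily_sales_etl():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull raw records from a source system or landing bucket.
        return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": 13.5}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Placeholder: cleanse and enrich records.
        return [{**r, "amount_cents": int(r["amount"] * 100)} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder: write to a warehouse (Snowflake, Redshift, BigQuery, Synapse).
        print(f"Loaded {len(rows)} rows")

    # TaskFlow wires extract >> transform >> load from these calls.
    load(transform(extract()))

daily_sales_etl()
```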
Requirements

  • Experience: 8+ years in Data Engineering or related roles, with a focus on cloud technologies and big data solutions.
  • Technical Expertise: Deep knowledge of cloud platforms such as Azure, AWS, and GCP.
  • Hands-on experience with Big Data technologies like Apache Spark, Hadoop, Kafka, and Flink.
  • Solid understanding of ETL frameworks (DBT, Apache Airflow, AWS Glue, etc.).
  • Expertise in containerization with Docker and Kubernetes (AKS, EKS, GKE).
  • Proven experience in Data Warehousing & Modeling (Snowflake, Redshift, BigQuery, Synapse).
  • Strong background in Data Security and Governance (IAM, RBAC, encryption, data lineage).
  • Experience with CI/CD pipelines using Terraform, GitHub Actions, and Azure DevOps.
  • Programming & Scripting Skills: Proficiency in Python, Bash, or PowerShell for automation tasks.
  • Cloud Architecture: Experience designing hybrid/multi-cloud architectures for high availability and fault tolerance across Azure, AWS, and GCP.
  • Leadership & Mentorship: Proven ability to lead teams, mentor junior engineers, and collaborate effectively with cross-functional teams.

Preferred Skills

  • Familiarity with Machine Learning pipelines and predictive analytics.
  • Experience with Serverless Computing (AWS Lambda, Azure Functions, Google Cloud Functions); see the sketch after this list.
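As a taste of the serverless item above, here is a minimal AWS Lambda handler in Python that reacts to S3 "object created" notifications. The bucket layout, field names, and transform are hypothetical, and error handling is trimmed for brevity.

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by S3 object-created notifications; each record names one new object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch and parse the new object (assumed to be a small JSON document).
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        payload = json.loads(body)

        # Placeholder transform: tag the payload and write it under a processed/ prefix.
        payload["processed"] = True
        s3.put_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            Body=json.dumps(payload).encode("utf-8"),
        )

    return {"statusCode": 200}
```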

Job Application Tips

  • Tailor your resume to highlight relevant experience for this position
  • Write a compelling cover letter that addresses the specific requirements
  • Research the company culture and values before applying
  • Prepare examples of your work that demonstrate your skills
  • Follow up on your application after a reasonable time period
