DeepLight AI

Data Engineer

Posted: 6 minutes ago

Job Description

DeepLight AI is a specialist AI and data consultancy with extensive experience implementing intelligent enterprise systems across multiple industries, with particular depth in financial services and banking. Our team combines deep expertise in data science, statistical modeling, AI/ML technologies, workflow automation, and systems integration with a practical understanding of complex business operations.

The Data Engineer is responsible for designing, implementing, and optimising data pipelines and infrastructure to support our cutting-edge AI systems. The Data Engineer collaborates closely with our multidisciplinary team to ensure the efficient collection, storage, processing, and analysis of large-scale data, enabling us to unlock valuable insights and drive innovation across various domains.

Responsibilities of the role:

  • Design, build, and optimise scalable data solutions, primarily utilising the Lakehouse architecture to unify data warehousing and data lake capabilities.
  • Advise stakeholders on the strategic choice between Data Warehouse, Data Lake, and Lakehouse architectures based on specific business needs, cost, and latency requirements.
  • Design, develop, and maintain scalable and reliable data pipelines to ingest, transform, and load diverse datasets from various sources, including structured and unstructured data, streaming data, and real-time feeds.
  • Implement standards and tooling to ensure ACID properties, schema evolution, and high data quality within the Lakehouse environment (see the sketch after this list).
  • Implement robust data governance frameworks covering security, privacy, integrity, compliance, and auditing.
  • Continuously optimise data storage, compute resources, and query performance across the data platform to reduce costs and improve latency for both BI and ML workloads, leveraging techniques such as indexing, partitioning, and parallel processing.
  • Develop and maintain CI/CD pipelines to automate the entire machine learning lifecycle, from data validation and model training to deployment and infrastructure provisioning.
  • Deploy, manage, and scale machine learning models into production environments, utilising MLOps principles for reliable and repeatable operations.
  • Establish and manage monitoring systems to track model performance metrics, detect data drift (changes in input data) and model decay (degradation in prediction accuracy).
  • Ensure rigorous version control and tracking for all components: code, datasets, and trained model artifacts (using tools like MLflow or similar).
  • Create comprehensive documentation, including technical specifications, data flow diagrams, and operational procedures, to facilitate understanding, collaboration, and knowledge sharing.
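The bullet on ACID guarantees, schema enforcement, and partitioning can be made concrete with a short example. The following is a minimal, illustrative sketch in PySpark against Delta Lake; the paths, schema, and table layout are hypothetical placeholders and assume a Spark environment with the delta-spark package available, not a prescribed DeepLight implementation.

    from pyspark.sql import SparkSession, functions as F
    from delta.tables import DeltaTable

    spark = SparkSession.builder.appName("lakehouse-ingest").getOrCreate()

    # Enforce a schema on read so malformed records are caught at ingestion time.
    raw = (
        spark.read.format("json")
        .schema("event_id STRING, event_ts TIMESTAMP, payload STRING")
        .load("/lake/bronze/raw_events/")   # hypothetical landing path
    )

    clean = (
        raw.dropDuplicates(["event_id"])
           .withColumn("event_date", F.to_date("event_ts"))
    )

    target_path = "/lake/silver/events"     # hypothetical curated path

    if DeltaTable.isDeltaTable(spark, target_path):
        # MERGE provides ACID upsert semantics on the existing Delta table.
        (DeltaTable.forPath(spark, target_path).alias("t")
            .merge(clean.alias("s"), "t.event_id = s.event_id")
            .whenMatchedUpdateAll()
            .whenNotMatchedInsertAll()
            .execute())
    else:
        # First load: partition by date so BI and ML queries can prune scans.
        clean.write.format("delta").partitionBy("event_date").save(target_path)

Partitioning by a date column is one common way to reduce scan cost and query latency; the right partitioning key depends on the dominant query patterns.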
Requirements

  • Proven practical experience in designing, building, and optimising solutions using Data Lakehouse architectures (e.g., Databricks, Delta Lake).
  • Strong hands-on experience managing data ingestion, schema enforcement, and ACID properties, and utilising big data technologies/frameworks such as Spark and Kafka.
  • Expertise in data modeling, ETL/ELT processes, and data warehousing concepts.
  • Proficiency in SQL and scripting languages (e.g., Python, Scala).
  • Demonstrated practical experience implementing MLOps pipelines for production systems, including a solid understanding and implementation experience with MLOps principles: automation, governance, and monitoring of ML models throughout the entire lifecycle.
  • Experience with CI/CD tools, containerisation/orchestration technologies (e.g., Docker, Kubernetes), model serving frameworks (e.g., TensorFlow Serving, SageMaker), and experiment tracking (e.g., MLflow).
  • Experience with production monitoring tools to detect data drift or model decay (see the sketch at the end of this description).
  • Strong hands-on experience with major cloud platforms (e.g., AWS, Azure, GCP) and familiarity with DevOps practices.
  • Excellent analytical, problem-solving, and communication skills, with the ability to translate complex technical concepts into clear and actionable insights.
  • Proven ability to work effectively in a fast-paced, collaborative environment, with a passion for innovation and continuous learning.

Benefits & Growth Opportunities

  • Competitive salary and performance bonuses
  • Comprehensive health insurance
  • Professional development and certification support
  • Opportunity to work on cutting-edge AI projects
  • Flexible working arrangements
  • Career advancement opportunities in a rapidly growing AI company

This position offers a unique opportunity to shape the future of AI implementation while working with a talented team of professionals at the forefront of technological innovation. The successful candidate will play a crucial role in driving our company's success in delivering transformative AI solutions to our clients.
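To illustrate the experiment-tracking and drift-monitoring requirements above, here is a minimal sketch using MLflow and a two-sample Kolmogorov-Smirnov test from SciPy; the metric handling, the 0.05 threshold, and the assumption of a scikit-learn model are illustrative choices, not a mandated stack.

    import mlflow
    import numpy as np
    from scipy.stats import ks_2samp

    def log_training_run(model, params: dict, metrics: dict) -> None:
        # Record parameters, metrics, and the model artifact so every run is reproducible.
        with mlflow.start_run():
            mlflow.log_params(params)
            mlflow.log_metrics(metrics)
            mlflow.sklearn.log_model(model, "model")   # assumes a scikit-learn model

    def feature_has_drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
        # Two-sample Kolmogorov-Smirnov test on a single feature: a small p-value
        # suggests the live distribution differs from the training-time reference.
        _statistic, p_value = ks_2samp(reference, live)
        return p_value < alpha

A scheduled job can run checks like feature_has_drifted over recent production inputs and trigger alerting or retraining when drift is detected.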

Job Application Tips

  • Tailor your resume to highlight relevant experience for this position
  • Write a compelling cover letter that addresses the specific requirements
  • Research the company culture and values before applying
  • Prepare examples of your work that demonstrate your skills
  • Follow up on your application after a reasonable time period
