hackajob

Senior Data Engineer

Posted: 20 minutes ago

Job Description

hackajob is collaborating with Kainos to connect them with exceptional tech professionals for this role.

MAIN PURPOSE OF THE ROLE & RESPONSIBILITIES IN THE BUSINESS:

  • Experience of performance tuning
  • Experience of data visualisation and complex data transformations
  • Experience with streaming and event-processing architectures, including technologies such as Kafka and change-data-capture (CDC) products
  • Expertise in continuous improvement and sharing input on data best practice
  • Practical experience with AI technologies, tools, processes and delivery

As a Senior Data Engineer (Senior Associate) at Kainos, you will be responsible for designing and developing data processing and data persistence software components for solutions which handle data at scale. Working in agile teams, Senior Data Engineers provide strong development leadership and take responsibility for significant technical components of data systems. You will work within a multi-skilled agile team to design and develop large-scale data processing software to meet user needs in demanding production environments.

Your Responsibilities Will Include

  • Developing data processing software primarily for deployment in Big Data technologies, covering the full software lifecycle including design, code, test and defect resolution
  • Working with Architects and Lead Engineers to ensure the software supports non-functional needs
  • Collaborating with colleagues to resolve implementation challenges and ensure code quality and maintainability remain high, leading by example in code quality
  • Working with operations teams to ensure operational readiness
  • Advising customers and managers on the estimated effort and technical implications of user stories and user journeys
  • Coaching and mentoring team members

Minimum (Essential) Requirements

  • Strong software development experience in one of Java, Scala, or Python
  • Software development experience with data-processing platforms from vendors such as AWS, Azure, GCP or Databricks
  • Experience of developing substantial components for large-scale data processing solutions and deploying them into a production environment
  • Proficiency in SQL and SQL extensions for analytical queries
  • A solid understanding of ETL/ELT data processing pipelines and design patterns
  • Awareness of the key features and pitfalls of distributed data processing frameworks, data stores and data serialisation formats
  • The ability to write quality, testable code, with experience of automated testing
  • Experience with Continuous Integration and Continuous Deployment techniques
  • A keen interest in AI technologies

Desirable Requirements

Job Application Tips

  • Tailor your resume to highlight relevant experience for this position
  • Write a compelling cover letter that addresses the specific requirements
  • Research the company culture and values before applying
  • Prepare examples of your work that demonstrate your skills
  • Follow up on your application after a reasonable time period
