THRYVE

Data Specialist

Posted: 2 hours ago

Job Description

We are seeking a versatile and proactive Data Operations Engineer — a true jack of all trades — to bridge the gap between multiple specialist teams and technologies. This role is ideal for someone eager to learn, connect the dots across systems, and ensure the smooth operation of a complex data environment. The successful candidate will thrive in a collaborative setting, taking ownership of day-to-day system health, incident management, and cross-platform coordination. You will work closely with domain experts while ensuring all systems remain integrated, reliable, and well-documented.

About the Role

Key Responsibilities:

  • Operate daily health checks across Kestra, HVR, dltHub/Snowpipe, dbt, Snowflake Streams/Tasks, Honeydew, Looker, APIs, and AKS.
  • Perform L1/L2 incident triage using documented runbooks and escalate unresolved issues to the responsible expert.
  • Maintain and expand runbooks for recurring errors and known issues.
  • Manage incident and service tickets within Azure DevOps Boards/Planner and report on system health and availability metrics.
  • Support release validation and rollback execution during scheduled maintenance windows.
  • Monitor data-quality alerts and schema drift; execute predefined fixes or escalate to the DAI/DPE lead when required.
  • Act as the first point of contact for dashboard and API issues, coordinating resolution across teams.
  • Ensure complete documentation for operational handover before go-live and contribute to Safety Gate reviews.
  • Participate in root cause analysis and post-incident reviews, recommending process and automation improvements.

Responsibilities

  • Accountable for operational uptime and L1/L2 incident handling across the data platform stack.
  • Operates under the SRE (Site Reliability Engineering) framework defined by FlowOps, reporting SLI/SLO breaches and MTTR metrics.
  • Supports the Governance Lead on access audits and RBAC (Role-Based Access Control) implementation.
  • Collaborates with Infrastructure and Security teams on service alerts and AKS operations.
  • No ownership of patch management or architectural decisions — these remain with FlowOps and the Principal Platform Architect.

Qualifications

  • Proven experience in DataOps or cloud operations within a Snowflake-centric environment.
  • Working knowledge of Kestra, dbt, Snowflake Tasks/Streams, Honeydew/Looker, and APIs.
  • Familiarity with GitOps workflows (Azure DevOps YAML) and incident management processes.
  • Strong SQL skills and basic Python knowledge for troubleshooting and automation scripting.
  • Excellent analytical, documentation, and problem-solving skills with a structured approach.
  • Fluent in English (written and spoken); German proficiency is an advantage but not required.

Preferred Skills

  • Curious, collaborative, and eager to learn.
  • Enjoys working across a wide range of tools and technologies, without being tied to one specialism.

