Samba TV

Data Scientist (Knowledge Graph & Identity)

Posted: 21 hours ago


Job Description

Samba TV tracks streaming and broadcast video across the world with our proprietary data and technology. We are on a mission to fundamentally transform the viewing experience for everyone. Our data enables media companies to connect with audiences for new shows and movies, and enables advertisers to engage viewers and measure reach across all their devices. We have an amazing story with a unique perspective on culture, formed by a global footprint of data and AI-driven insights.

As a mid-level Data Scientist on Samba TV's Knowledge Graph & Identity team in Warsaw, you will own end-to-end delivery of significant data science projects with minimal guidance. You are a reliable, autonomous contributor with deep expertise in at least one of Samba's core domains (knowledge graphs, identity spine, measurement, or audience modeling) and the technical range to build production-ready solutions using modern ML and AI methodologies. You'll work closely with peers, product, and engineering, and play an active role in mentoring junior data scientists on the team.

What You'll Do

  • Own end-to-end delivery of significant data science projects, from problem scoping and approach design through production deployment, with a focus on knowledge graph and identity solutions
  • Make sound, independently reasoned decisions on methodology, model selection, and evaluation; document them clearly in technical solution documents covering problem statement, approach, metrics, and timeline
  • Lead solution design for your own initiatives; break down complex epics into well-scoped user stories with clear acceptance criteria, adopting DataOps and MLOps best practices throughout: experiment tracking, pipeline orchestration, model monitoring, and reproducibility
  • Build production-quality Python and PySpark code on Databricks (well-tested, documented, and reusable) and implement advanced ML and AI-powered workflows, including entity resolution, probabilistic record linkage, embedding-based matching, semantic similarity, and LLM-augmented pipelines
  • Develop and maintain reusable tools, libraries, and documentation that improve team efficiency and technical standards; conduct code reviews with constructive, specific feedback that raises the bar
  • Mentor junior data scientists on technical execution, code quality, and career development; lead internal talks or workshops on knowledge graphs, identity, or ML topics
  • Collaborate cross-functionally with product, engineering, and operations: translate business requirements into technical specifications, partner with data engineering on scalable pipeline design, and participate in cross-functional design reviews and working groups

Who You Are

  • Bachelor's degree in Statistics, Data Science, Computer Science, Mathematics, or a related quantitative field required; Master's strongly preferred
  • 3–5 years of hands-on data science experience, with a demonstrated ability to own and deliver complex, multi-sprint projects independently
  • Advanced Python with production-quality code, testing, and documentation; strong SQL and PySpark for billion-row datasets
  • Experience with Databricks workflows, Delta Lake, and job orchestration; working knowledge of cloud platforms (AWS or GCP)
  • Solid command of core ML (regression, classification, clustering, model evaluation, and experimental design) applied to complex, high-volume data
  • Proficiency with MLOps practices: experiment tracking, pipeline orchestration (Airflow), and reproducible model deployment
  • Exposure to modern AI methodologies: RAG systems, LLM-augmented models, vector databases, and semantic search
  • Strong communicator, able to translate technical work into clear documentation, user stories, and cross-functional conversations
  • Demonstrated ability to mentor junior data scientists and contribute to team standards

Preferred Skills

  • Hands-on experience with knowledge graph construction, entity resolution, or semantic data modeling (RDF, OWL, SPARQL, or equivalent graph frameworks)
  • Familiarity with probabilistic record linkage, identity graph approaches, or embedding-based entity matching at scale
  • Experience with causal inference methods (A/B testing, synthetic control, uplift modeling)
  • Experience with deduplication, enrichment, or web-to-TV linkage problems
  • Background in media, ad tech, or measurement: TV viewership (ACR/STB data), digital audience modeling, cross-platform measurement (linear + CTV/OTT), or identity resolution in privacy-constrained environments
  • Familiarity with the measurement and identity vendor landscape (Nielsen, Comscore, LiveRamp, The Trade Desk)

Samba TV is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We strive to empower connection with one another, reflect the communities we serve, and tackle meaningful projects that make a real impact.

Samba TV may collect personal information directly from you as a job applicant. Samba TV may also receive personal information from third parties, for example in connection with a background, employment, or reference check, in accordance with applicable law. For further details, please see Samba's Applicant Privacy Policy. For residents of the EU, Samba Inc. is the data controller.

