Experience: 5 - 20 Years
Location: Glendale, CA (USA) – Onsite 4 days a week

Must-Have
- 5+ years of data engineering experience, specifically developing large-scale data pipelines
- Experience with Spark, Airflow, Databricks or Snowflake, SQL, Python

The Company
Headquartered in Los Angeles, this leader in the Entertainment & Media space is focused on delivering world-class stories and experiences to its global audience. To offer the best entertainment experiences, their technology teams focus on continued innovation and the use of cutting-edge technology.

Platform / Stack
You will work with technologies that include Python, AWS, Spark, Snowflake, Databricks, and Airflow.
What You'll Do as a Sr Data Engineer
- Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
- Build and maintain APIs to expose data to downstream applications
- Develop real-time streaming data pipelines
- Utilize a tech stack including Airflow, Spark, Databricks, Delta Lake, and Snowflake
- Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
- Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
- Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and the project's reliability and accuracy expectations

Qualifications
You could be a great fit if you have:
- 5+ years of data engineering experience developing large data pipelines
- Proficiency in at least one major programming language (e.g., Python, Java, Scala)
- Hands-on production experience with distributed processing systems such as Spark
- Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
- Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
- Experience developing APIs with GraphQL
- Advanced understanding of OLTP vs. OLAP environments
- Strong background in distributed data processing, software engineering of data services, or data modeling

Skills:
GraphQL, Spark, AWS, data modeling, Delta Lake, SQL, Snowflake, data engineering, API development, OLTP, Java, Airflow, Scala, Python, Databricks, OLAP, large-scale data pipelines