Senior Data Engineer (m/f/x) - full-time

Munich (Germany) or Remote (EU)

About us

Do you love stories? If so, please keep reading, because we certainly do. We believe the ability to tell stories is what makes us human. Joyn is your streaming app with over 65 live TV channels, exclusive previews, originals and collections. We understand Joyn as a partnership – an invitation to content providers and users alike to make entertainment more meaningful and fun. Our app aggregates global and especially local content in a way that is relevant for Germany, both live TV and on-demand. All kinds of stories, and more to come, every day.

We hire the best, because we need people that are as customer-focused as we are. We are looking for champions to help us further connect with our audience. It’s not a small or easy task, but it’s a fun and rewarding one. Do you think you’re up for it? Great. Then send us your application!

About the job

At Joyn you will work with the sizable data of our video streaming platform as well as data from external sources. Together with your team of talented and passionate big data engineers you are building a data platform in a highly collaborative and fast-paced environment. Both our recommendation engine and our heartbeat systems are part of this platform. The resilient and highly scalable production systems that you build will delight our customers by making sure they always see the most exciting content up front.

What you can expect

  • Use your strong background in distributed data processing, stream processing, software engineering design, and data modelling concepts to develop reliable and scalable production systems
  • Develop Scala components for our Spark based data processing pipeline, which powers our playout monitoring system
  • Enhance our BigQuery based Data Lake, orchestrate data jobs for our KPI reporting system
  • Create our user experience personalization engine, based on the machine learning prototypes of your data science colleagues
  • Design data collection APIs, develop backend services with Play, and facilitate contract agreements
  • Test, integrate and deploy your code automatically using GitLab CI/CD, and take care of running your services in production on Google Cloud Platform and/or AWS
  • Engineering is craftsmanship: you enjoy applying good engineering practices and implementing processes, systems and tools that aid you and your team in your day-to-day work
  • Participate in technical design and architectural discussions within your team and with other teams to solve real consumer issues
  • Work in a team of passionate and talented engineers and data scientists that self-organise and continuously deliver value by embracing a lean and agile mindset
  • You learn and succeed in areas you haven’t touched before, and you are open to coaching and being coached within the team

Your Profile

  • A degree in computer science or a very high level of practical experience in that field
  • Several years of working experience in data engineering/science, ideally in media or e-commerce
  • Deep knowledge in designing RESTful APIs and bringing them into production; familiarity with topics like load balancing, proxy servers, and DNS
  • We use Python for our recommendation services and Scala for our backend applications, so we are looking for professional experience in at least one of these languages (or alternatively Kotlin/Java/Groovy)
  • Thinking in cloud-native design patterns (e.g. auto-scaling, elasticity, container orchestration) when architecting new services to handle data at scale while optimizing costs
  • Striving for “everything as code”, with working experience in CI/CD and test automation
  • Taking care of code quality, sharing your knowledge, and loving to do code reviews
  • Comfortable working in a fast-paced, ever-changing environment that lets you grow

Nice to have

  • Working experience with GCP services (BigQuery, Spanner, CloudRun, PubSub, Dataflow) and/or AWS services (S3, CloudFormation, Fargate, DynamoDB)
  • Working experience with Spark, which we use for stream processing; alternatively, experience with other stream processing frameworks such as KStreams, Flink or Beam
  • Knowledge in distributed computing