Data Engineer (m/f/x) - full-time

About us

Do you love stories? If so, please keep reading, because we certainly do. We believe the ability to tell stories is what makes us human. Joyn is your streaming app with over 65 live TV channels, exclusive previews, originals, and collections. We understand Joyn as a partnership – an invitation to content providers and users alike to make entertainment more meaningful and fun. Our app aggregates global and especially local content in a way that is relevant for Germany, both live TV and on-demand. All kinds of stories, and more to come, every day.

We hire the best, because we need people who are as customer-focused as we are. We are looking for champions to help us further connect with our audience. It’s not a small or easy task, but it’s a fun and rewarding one. Do you think you’re up for it? Great. Then send us your application!


About the Job

At Joyn, our users, services, and connected third parties produce about 1 billion events per day. Being able to collect and process that data is the fundamental prerequisite for getting the insights we need to grow our business and improve our product. Together with your team of talented and passionate data engineers and scientists, you enable that by building a highly performant, scalable, and resilient data platform. We do that by living a “we build it, we ship it, we own it” culture, where we constantly improve our tools, processes, and software stack. If you love what you do, want to make an impact, and want to embrace the challenge, Joyn is the right place to work.

Opportunities to make an impact - what you do

  • Maintain and improve our data stack, which consists of top-notch tools like BigQuery, Snowflake, dbt, Great Expectations, Databricks, Google Dataflow, and Prefect (see the illustrative flow after this list).
  • Use your know-how in distributed data processing, stream processing, software engineering, and data modeling to develop reliable and scalable cloud-based data processing systems in Scala, Python, and SQL.
  • Design and implement high-performance, scalable, and resilient data collection APIs that process tens of thousands of requests per second.
  • Test, integrate, and deploy your code automatically using GitLab CI/CD and take care of running our services in production on GCP and/or AWS.
  • Apply software engineering best practices to implement processes, systems, and tools that help you and your team move fast with high confidence.
  • Participate in technical design and architectural discussions with your own and other teams to solve real user issues.
  • Learn and strive to excel in areas you haven’t touched before, and be open to sharing your knowledge and learnings with your team.
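
To give a flavor of this work, here is a minimal, purely illustrative sketch of an orchestrated pipeline using Prefect (one of the tools named above). It assumes Prefect 2.x; the task names, payload, and logic are hypothetical and not our production code:

```python
# Hypothetical example only - not production code.
# Assumes Prefect 2.x (`pip install prefect`).
from prefect import flow, task


@task(retries=3, retry_delay_seconds=10)
def extract_events(day: str) -> list[dict]:
    # In production this would read from an event store or message queue;
    # a fixed sample keeps the sketch self-contained.
    return [
        {"user": "u1", "event": "play", "day": day},
        {"user": "u2", "event": "pause", "day": day},
    ]


@task
def transform(events: list[dict]) -> list[dict]:
    # Keep only playback events (a hypothetical metric of interest).
    return [e for e in events if e["event"] == "play"]


@task
def load(rows: list[dict]) -> None:
    # Stand-in for a warehouse load, e.g. into BigQuery or Snowflake.
    print(f"loaded {len(rows)} rows")


@flow(name="daily-playback-events")
def daily_playback_events(day: str) -> None:
    load(transform(extract_events(day)))


if __name__ == "__main__":
    daily_playback_events("2024-01-01")
```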

What we are looking for

  • Junior to experienced data engineers who want to learn and grow together with us.
  • A degree in computer science or a related field, or a high level of practical experience working with data at scale.
  • Experience designing RESTful APIs and bringing them into production (see the sketch after this list).
  • Familiarity with topics like load balancing, proxy servers, and DNS.
  • Expertise in Python or Scala (or alternatively Kotlin, Java, or Groovy), the languages that drive our data pipelines and backend applications.
  • Thinking in cloud-native design patterns (e.g. auto-scaling, elasticity, container orchestration) when architecting new services to handle data at scale while optimizing costs.
  • Striving for “everything as code” and having working experience with CI/CD and test automation.
  • Caring about code quality, sharing your knowledge, and enjoying code reviews.
  • Being comfortable working in a fast-paced and ever-changing environment that lets you grow.
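
For a sense of what designing RESTful APIs means here, below is a minimal, hypothetical event-collection endpoint. It uses FastAPI purely for illustration; the framework choice, route, and payload shape are assumptions, not a description of our stack:

```python
# Hypothetical example only - framework choice (FastAPI) and payload
# shape are illustrative assumptions. Run with: uvicorn collector:app
from fastapi import FastAPI, status
from pydantic import BaseModel

app = FastAPI()


class Event(BaseModel):
    user_id: str
    name: str
    timestamp: int


@app.post("/v1/events", status_code=status.HTTP_202_ACCEPTED)
async def collect(event: Event) -> dict:
    # At tens of thousands of requests per second, the handler would
    # enqueue to a buffer (e.g. Pub/Sub or Kinesis) rather than write
    # synchronously; here we simply acknowledge receipt.
    return {"accepted": True, "event": event.name}
```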

Nice to have

  • Working experience with GCP services (BigQuery, Spanner, CloudRun, PubSub, Dataflow) and/or AWS services (S3, CloudFormation, Fargate, DynamoDB).
  • Working experience with a state-of-the-art ETL tool, e.g. Airbyte, Fivetran, Prefect, Airflow, NiFi, or Talend.
  • Understanding of a stream processing framework, e.g. KStreams, Dataflow, Apache Beam, or Flink (see the sketch below).
  • Knowledge of distributed computing.
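
As a taste of the stream processing mentioned above, here is a minimal sketch using the Apache Beam Python SDK, counting playback events in fixed one-minute windows. The inlined data and event names are hypothetical:

```python
# Hypothetical example only - assumes the Apache Beam Python SDK
# (`pip install apache-beam`); runs locally on the DirectRunner.
import apache_beam as beam
from apache_beam.transforms.window import FixedWindows, TimestampedValue

# (event name, count, seconds since epoch) - inlined to stay
# self-contained; a real job would read from Pub/Sub or Kafka.
events = [("play", 1, 0), ("play", 1, 30), ("pause", 1, 70)]

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(events)
        | "Stamp" >> beam.Map(lambda e: TimestampedValue((e[0], e[1]), e[2]))
        | "Window" >> beam.WindowInto(FixedWindows(60))  # one-minute windows
        | "Count" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```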