More Efficient UD(A)Fs with PySpark

With the release of Spark 2.3, implementing user-defined functions with PySpark became a lot easier and faster. Unfortunately, there are still some rough edges when it comes to complex data types, and these need to be worked around.
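
As a taste of what the Spark 2.3 API looks like, here is a minimal sketch of a scalar Pandas UDF; the example DataFrame, its columns and the plus_one function are invented for illustration only:

    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import pandas_udf, PandasUDFType
    from pyspark.sql.types import DoubleType

    spark = SparkSession.builder.getOrCreate()
    # toy data, purely for illustration
    df = spark.createDataFrame([(1, 1.0), (1, 2.0), (2, 3.0)], ("id", "v"))

    # Scalar Pandas UDF: operates on whole pandas Series instead of single rows,
    # which avoids most of the per-row serialization overhead of classic UDFs.
    @pandas_udf(DoubleType(), PandasUDFType.SCALAR)
    def plus_one(v):
        return v + 1.0

    df.withColumn("v_plus_one", plus_one(df["v"])).show()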

more ...

Working efficiently with JupyterLab Notebooks

Having worked in the data science domain for quite some years, I have seen good Jupyter notebooks but also a lot of ugly ones. Notebooks can strike the perfect balance between text, code and visualisations, but how often do your notebooks instead end up messy and incomprehensible after a while? Follow a few simple best practices to work more efficiently with your notebooks.

more ...

Multiplicative LSTM for sequence-based Recommenders

Recommender systems support the decision-making processes of customers with personalized suggestions. They are widely used and influence the daily lives of almost everyone in domains like e-commerce, social media, and entertainment. Quite often, the dimension of time plays a dominant role in generating a relevant recommendation.

more ...

Managing isolated Environments with PySpark

The Spark data processing platform is becoming more and more important for data scientists using Python. PySpark - the official Python API for Spark - makes it easy to get started, but managing applications and their dependencies in isolated environments is no easy task.
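
To give a flavour of one common approach, here is a minimal sketch that ships a packed Python environment (for example one built with conda-pack) to a YARN cluster; the archive name environment.tar.gz and the alias environment are assumptions for illustration, not the only way to do this:

    import os
    from pyspark.sql import SparkSession

    # Point the executor workers at the interpreter inside the unpacked archive;
    # this relative path only exists inside the YARN container working directory.
    os.environ["PYSPARK_PYTHON"] = "./environment/bin/python"

    spark = (
        SparkSession.builder
        .master("yarn")
        .appName("isolated-env-demo")
        # Ship the packed environment; the "#environment" suffix is the alias
        # under which YARN unpacks the archive on every executor.
        .config("spark.yarn.dist.archives", "environment.tar.gz#environment")
        .getOrCreate()
    )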

more ...