MLOps World 2023

2 minute read

Here is the recording of the workshop.

Here is the proposal I submitted for this workshop.

Abstract

Serving a machine learning model is not particularly easy, and serving two or three models in parallel is harder still: a single-model deployment recipe can start to crumble. To tackle the challenges of serving individual or multiple models in production, we have handy tools like MLServer and Seldon Core. The former is a Python library that lets us create machine learning microservices with one or more models in the same service, and the latter lets us build simple-to-complex inference graphs that can handle A/B testing, shadow and canary deployments, feature transformations, and model monitoring. If you want to learn how to use open-source tools to build microservices tailored to your different use cases and model recipes, come and join this hands-on workshop and get started with several of the key steps in the machine learning workflow as we walk through fun examples from the broader music industry.
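Conceptually, an inference graph is a directed chain of steps (feature transformers, models, routers) that each request flows through. Here is a toy, framework-independent sketch of that idea in plain Python — all names here are illustrative, not Seldon Core's actual API:

```python
from typing import Callable, List

# A step is any function that maps a request payload to a payload.
Step = Callable[[dict], dict]

def make_graph(steps: List[Step]) -> Step:
    """Compose steps into a single callable, mimicking a linear inference graph."""
    def run(payload: dict) -> dict:
        for step in steps:
            payload = step(payload)
        return payload
    return run

# Illustrative steps: a feature transformer followed by a stand-in "model".
def scale_features(payload: dict) -> dict:
    payload["features"] = [x / 10.0 for x in payload["features"]]
    return payload

def mean_model(payload: dict) -> dict:
    feats = payload["features"]
    payload["prediction"] = sum(feats) / len(feats)
    return payload

graph = make_graph([scale_features, mean_model])
result = graph({"features": [10.0, 20.0, 30.0]})
# result["prediction"] == 2.0
```

In a real deployment, each step would be its own containerised service and the graph would be declared in a Seldon Core manifest rather than composed in-process, but the data flow is the same.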

What will the audience learn?

The core of the workshop will teach participants how to create machine learning microservices and inference graphs, and how to monitor the predictions these services make. The main use case we’ll follow throughout the workshop comes from the music industry, making for a fun and content-rich three hours.

Throughout the workshop, we will be building a creative ML platform in several incremental steps. In the first 50 minutes, we will set up the user interface and the back-end of our application, and then we’ll spin up the first model we will interact with. In the second 50-minute section, we will start adding functionality to our platform by running new machine learning models inside our inference server. Lastly, we’ll create replicas of each model, develop an inference graph to come up with unique tunes, and conduct A/B testing on our service to evaluate the output of different models against real songs.
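The A/B testing step can be illustrated with a tiny, framework-independent router that splits traffic deterministically between two model variants. This is a sketch with made-up names, not Seldon Core's routing API — in the workshop the split is handled by the serving infrastructure itself:

```python
import hashlib
from typing import Callable, List, Tuple

Model = Callable[[List[float]], float]

def ab_router(model_a: Model, model_b: Model, split: float = 0.5):
    """Route each request to variant A or B based on a hash of its request id,
    so the same id always lands on the same variant (sticky assignment)."""
    def route(request_id: str, features: List[float]) -> Tuple[str, float]:
        bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
        if bucket < split * 100:
            return "A", model_a(features)
        return "B", model_b(features)
    return route

# Two toy "models" standing in for real inference services.
model_a = lambda xs: sum(xs) / len(xs)  # predicts the mean
model_b = lambda xs: max(xs)            # predicts the max

route = ab_router(model_a, model_b, split=0.5)
variant, prediction = route("user-42", [1.0, 2.0, 3.0])
```

Hashing the request id rather than drawing a random number keeps assignments stable across retries, which makes the downstream comparison of the two variants' predictions much cleaner.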

Within our 3 hours together, we’ll have two 10- to 15-minute breaks and there will be plenty of exercises for participants to complete.

Why is this takeaway important?

The usage of machine learning in different applications is only increasing, and giving developers the flexibility to take their serving tools to any cloud provider of their choosing, or to an on-premise option, will give companies more freedom to experiment with and potentially adopt machine learning. On the other hand, giving data scientists and machine learning engineers a powerful tool to serve the models they create with ease can increase productivity while reducing a model's time-to-value after it has been trained. Hence, this workshop opens new avenues for programmers using (or wanting to use) machine learning at their organisations.

What is unique about this, which can’t be found online?

This will be a hands-on workshop using audio data and different serving tools for machine learning. There are plenty of use cases and examples out there with tabular, image, and text data, but not that many (if any) that touch on production machine learning with audio data. My hope is that participants will have fun learning how to serve machine learning models as they work with a different data modality.
