

Kubeflow

What is Kubeflow?

Kubeflow is the open source MLOps platform for machine learning

Kubeflow enables you to develop and deploy machine learning models at any scale. It is a cloud-native application that runs on any cloud.

Charmed Kubeflow is Canonical’s distribution of Kubeflow. It is secure and integrates seamlessly with other leading tools such as MLflow.


Try out Charmed Kubeflow
Read our MLOps toolkit

Contributors to Kubeflow


What's inside Kubeflow?


Kubeflow dashboard

The central dashboard, with multi-user isolation, gives data scientists and engineers a platform to leverage Kubernetes to develop, deploy and monitor their models in production.

JupyterLab and VSCode

With Kubeflow, users can spin up Jupyter notebook servers, as well as VSCode, directly from the dashboard, allocating the right storage, CPUs and GPUs.

ML libraries & frameworks

Kubeflow is compatible with your choice of data science libraries and frameworks: TensorFlow, PyTorch, MXNet, XGBoost, scikit-learn and more.


Kubeflow Pipelines

Automate your ML workflow into pipelines by containerizing steps as pipeline components and defining inputs, outputs, parameters and generated artifacts. Learn more ›
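As an illustration, here is a minimal sketch of a pipeline built with the KFP Python SDK. The component bodies, names and parameters are placeholders, not a production workflow:

    from kfp import dsl, compiler

    @dsl.component
    def preprocess(raw_path: str) -> str:
        # Hypothetical preprocessing step: returns the location of cleaned data.
        return raw_path + ".cleaned"

    @dsl.component
    def train(data_path: str, learning_rate: float) -> str:
        # Hypothetical training step: returns a model identifier.
        return f"model(data={data_path}, lr={learning_rate})"

    @dsl.pipeline(name="example-training-pipeline")
    def training_pipeline(raw_path: str = "data.csv", learning_rate: float = 0.01):
        # Each step runs as a container; outputs flow to inputs as artifacts and parameters.
        cleaned = preprocess(raw_path=raw_path)
        train(data_path=cleaned.output, learning_rate=learning_rate)

    # Compile to a spec that can be uploaded and run from the Kubeflow dashboard.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")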

Experiments, Runs and Recurring Runs

Experiments group Kubeflow Pipelines 'Runs' and 'Recurring Runs', letting you find the right parameters for your model and compare and replicate results.

Hyperparameter tuning / AutoML

Kubeflow includes Katib for hyperparameter tuning. Katib runs trials with different hyperparameters (e.g. learning rate, number of hidden layers), optimising for the best ML model.
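Conceptually, the work Katib automates looks like the grid search sketched below; train_and_evaluate is a stand-in for a real training trial, which Katib would launch as Kubernetes jobs and could also drive with random or Bayesian search:

    from itertools import product

    def train_and_evaluate(learning_rate: float, hidden_layers: int) -> float:
        # Stand-in for a real trial: train a model and return a validation metric.
        return 1.0 - abs(learning_rate - 0.01) - 0.05 * abs(hidden_layers - 3)

    search_space = {
        "learning_rate": [0.001, 0.01, 0.1],
        "hidden_layers": [2, 3, 4],
    }

    # Try every combination and keep the best-scoring set of hyperparameters.
    best_score, best_params = float("-inf"), None
    for lr, layers in product(*search_space.values()):
        score = train_and_evaluate(learning_rate=lr, hidden_layers=layers)
        if score > best_score:
            best_score, best_params = score, {"learning_rate": lr, "hidden_layers": layers}

    print(f"Best trial: {best_params} (score={best_score:.3f})")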


KServe for inference serving

KServe is a multi-framework model deployment tool with serverless inferencing, canary roll-outs, pre & post-processing and explainability. Learn more ›
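Once a model is served, it can be queried over HTTP. Below is a minimal sketch using the KServe v1 prediction protocol; the host, model name and input values are placeholders:

    import requests

    # Placeholder endpoint for an InferenceService named "sklearn-iris".
    url = "http://sklearn-iris.example.com/v1/models/sklearn-iris:predict"
    payload = {"instances": [[6.8, 2.8, 4.8, 1.4], [6.0, 3.4, 4.5, 1.6]]}

    response = requests.post(url, json=payload, timeout=10)
    response.raise_for_status()

    # The v1 protocol returns one prediction per input instance.
    print(response.json()["predictions"])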

The integrations you need

Charmed Kubeflow integrates with MLflow for model registry, staging and monitoring in production, with Seldon Core for inference serving, and with Apache Spark for parallel data processing.

More...

Save, compare and share generated artifacts: models, images, plots.


MLOps at any scale

Enterprise-ready Charmed Kubeflow, the fully supported MLOps platform for any cloud, is validated and certified on high-end AI hardware, such as NVIDIA DGX.

A complete solution for sophisticated data science labs. Upgrades and security updates are all supported in the free, open source distribution.


Get the whitepaper

Why MLOps?

Bringing AI solutions to market involves many steps: data pre-processing, training, model deployment, inference serving at scale and more. The list of tasks is complex, and keeping them in a set of notebooks or scripts is hard to maintain, share and collaborate on, leading to inefficient processes.

Google estimates that only about 20% of the effort and code required to bring AI systems to production is the development of ML code; the rest is operations. Standardizing ops in your ML workflows can therefore greatly decrease time-to-market and costs for your AI solutions.


Read more about MLOps
MLOps diagram

Who uses Kubeflow?

Thousands of companies have chosen Kubeflow for their AI/ML stack.

They range from research institutions like CERN, to transport and logistics companies such as Uber, Lyft and GoJek, to financial and media companies including Spotify, Bloomberg, Shopify and PayPal.

Forward-looking enterprises are using Kubeflow to empower their data scientists.


Read our case study with the University of Tasmania

Kubeflow resources

  • Read more about Charmed MLflow’s capabilities and follow our tutorials.

  • Understand the differences between the most popular open source solutions.

  • Learn how to take models to production using open source MLOps platforms.


Get started today

Try out Kubeflow on your Kubernetes environment, or on MicroK8s, the zero-ops Kubernetes with high availability.

Deploy locally on your workstation, cloud VM or on-prem server.


Try Kubeflow on MicroK8s
Contact us now