For MLflow users

Tell the story of your experiments with Skore.
Ship with MLflow.

Skore is a Python library built by the scikit-learn founders. It evaluates and inspects your predictive models, structures your experiments, and stores results on your machine, so you can start without operating a separate tracking server. When you are ready, sync to Skore Hub or to MLflow.

MLflow is a strong MLOps choice for experiment tracking, the model registry, and moving models toward production safely. Skore sits upstream: interpret results, compare approaches, and make sound modeling choices before you register and deploy.

By the scikit-learn core maintainers · Open source on GitHub · MIT license · Local-first; optional Hub sync
evaluate_and_log_to_mlflow.py
# evaluate, then sync to MLflow when ready
from skore import Project, evaluate

# `estimator`, `X`, `y` are placeholders for your estimator and data;
# splitter=5 requests 5-fold cross-validation.
report = evaluate(estimator, X, y, splitter=5)

# PROJECT and TRACKING_URI are placeholders for your project name
# and your MLflow tracking server.
project = Project(
    PROJECT,
    mode="mlflow",
    tracking_uri=TRACKING_URI,
)
project.put("baseline", report)
mode = "mlflow" skore.evaluate()
pip
$ pip install skore[mlflow]
At a glance

What Skore does / What MLflow does

Two libraries, two jobs. Skore makes the modeling choice. MLflow tracks the run and ships the artifact.

Methodological layer

What Skore does

  • Builds structured evaluation reports from scikit-learn-compatible estimators.
  • Surfaces insights, figures, and tables tied to one evaluation, not scattered artifacts.
  • Stores and retrieves reports locally, or syncs them to Skore Hub or MLflow via Project (see the sketch after this list).
  • Helps you compare models and interpret results before production handoff.
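
A minimal local-first sketch, assuming the Project + put pattern from the snippet above; calling Project with only a name to get a purely local store is an assumption here, not something this page documents:

import skore
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200, random_state=0)
report = skore.evaluate(Ridge(), X, y, splitter=5)  # entry point used throughout this page

project = skore.Project("my_project")  # local store, no tracking server (assumed constructor)
project.put("baseline", report)        # same call as the MLflow-mode example above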
Operational layer

What MLflow does

  • Tracks experiments, parameters, and metrics across runs and environments.
  • Provides a model registry and versioning for promotion toward deployment.
  • Stores artifacts and integrates with common MLOps and serving stacks.
  • Offers a standard tracking_uri for teams and automation (see the sketch below).
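
For contrast, this is what the operational layer looks like in plain MLflow, using standard MLflow calls; the parameter and metric names here are illustrative only:

import mlflow

mlflow.set_tracking_uri("http://localhost:5000")  # illustrative tracking server
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("alpha", 1.0)    # illustrative hyperparameter
    mlflow.log_metric("rmse", 0.42)   # illustrative metric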
The handoff

skore.evaluate → skore.Project(mode="mlflow") → MLflow registry → production

Same artifact, four hands.
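
Read end to end, the handoff could be wired like this. The evaluate and put calls follow this page's snippets; registering the resulting model through MLflow's standard registry API is a sketch of one possible final step, not a documented bridge:

import mlflow
from skore import Project, evaluate

report = evaluate(estimator, X, y, splitter=5)  # 1. evaluate (placeholders as above)
project = Project(PROJECT, mode="mlflow", tracking_uri=TRACKING_URI)
project.put("baseline", report)                 # 2. sync the report to MLflow

# 3. promote via MLflow's registry; "runs:/<run_id>/model" is a placeholder
#    you would fill in from the synced run.
mlflow.register_model("runs:/<run_id>/model", "baseline-model")
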
What Skore does

Track your
data science.

Skore is a Python library to evaluate and get insights from your predictive models. It structures and stores your experiments so you can easily retrieve them later, without the fragile glue code that hosted experiment trackers often require.

01

    Reports for your experiments

    Evaluate one or several estimators, with a single train-test split or with cross-validation, and get a structured report from one entry point. You get an estimator report, a cross-validation report, or a comparison report, each built on the same mental model, so you can explore how your predictive models behave while you experiment (see the sketch after this list).

02

    Get insights that matter

    Turn results into clear visualizations through rich displays, and pull the underlying tables when you need to dig deeper, so figures and numbers stay tied to the same evaluation instead of drifting across notebooks and slides.

03

    Store and retrieve your reports

    Projects store and retrieve your reports so you can revisit insights or compare them with new experiments later. Keep everything on disk locally, or use Skore Hub when you want exploration and search in a dedicated interface.
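
Taken together, a compact sketch of that single entry point across report shapes, reusing the two splitter values shown on this page (a float for a train-test split, an int for cross-validation folds); the help() call listing a report's available accessors is an assumption about the report API:

import skore
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=0)
model = LogisticRegression()

single_split = skore.evaluate(model, X, y, splitter=0.2)  # one train-test split
cross_val = skore.evaluate(model, X, y, splitter=5)       # 5-fold cross-validation

cross_val.help()  # assumed: prints what the report can show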

See it in practice

Build a report,
then push it where your team works.

On the left, skore.evaluate follows the same patterns as our getting-started examples. On the right, persist the same report to Skore Hub or to an MLflow tracking server, your choice.

1

Create a report

import skore, skrub
from sklearn.linear_model import Ridge

# `df` is your feature dataframe, `y` your target;
# skrub wraps Ridge in a sensible preprocessing pipeline.
model = skrub.tabular_pipeline(Ridge())

# splitter=0.2 holds out 20% of the data as a test split.
report = skore.evaluate(model, df, y, splitter=0.2)
report
2

Push to Skore Hub or MLflow

from skore import Project

# `report` from skore.evaluate(...);
# PROJECT and TRACKING_URI are placeholders for your
# project name and MLflow tracking server.
project = Project(
    PROJECT,
    mode="mlflow",
    tracking_uri=TRACKING_URI,
)
project.put("baseline", report)
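
Once put returns, the run should be visible under the configured tracking server. Here is a quick check with standard MLflow calls; how Skore names the experiment and run is an assumption, so this simply lists everything:

import mlflow

mlflow.set_tracking_uri(TRACKING_URI)  # same placeholder as above
runs = mlflow.search_runs(search_all_experiments=True)
print(runs[["run_id", "status"]].head())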

We are not replacing MLflow.
We are complementing it.

Keep MLflow for the operational layer: tracking, registry, and the path to production. Add Skore for methodological depth on scikit-learn workflows: warnings about common pitfalls before you train, metrics that match your task, and fold-level views of cross-validation. What you promote is then backed by analysis you can explain, not only by numbers in a table.

  • Pitfall warnings before training (see the sketch after this list)
  • Task-appropriate metrics
  • Fold-level CV diagnostics
  • Explainable promotions
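
The first bullet has a concrete entry point today: skore ships a drop-in train_test_split that checks for common methodological pitfalls (for example, strong class imbalance) and warns before you train. A minimal sketch, assuming the keyword-argument form:

import skore
from sklearn.datasets import make_classification

# A deliberately imbalanced dataset to trigger a warning.
X, y = make_classification(n_samples=300, weights=[0.95], random_state=0)

# Drop-in for sklearn's splitter; emits pitfall warnings
# (e.g. high class imbalance) before any training happens.
X_train, X_test, y_train, y_test = skore.train_test_split(
    X=X, y=y, test_size=0.2, random_state=0
)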
Hands-on support

Need help? We've got
hands-on support.

Probabl offers Forward Deployed Engineering engagements for teams building or restructuring their ML workflow. If you want expert guidance on integrating Skore alongside MLflow or Skore Hub, we can help.

Engagement shape
  • Audit: Map your current pipeline. Surface methodological risk.
  • Integrate: Wire Skore alongside MLflow with your existing patterns.
  • Coach: Pair with your team on the first three production-bound reports.