Tell the story of your experiments with Skore.
Ship with MLflow.
Skore is a Python library built by the scikit-learn founders. It evaluates and inspects your predictive models, structures your experiments, and stores results on your machine, so you can start without operating a separate tracking server. When you are ready, sync to Skore Hub or to MLflow.
MLflow is a strong MLOps choice for experiment tracking, the model registry, and moving models toward production safely. Skore sits upstream: interpret results, compare approaches, and make sound modeling choices before you register and deploy.
Skore is a Python library to evaluate and get insights from your predictive models. It structures and stores your experiments so you can easily retrieve them later, without rebuilding the fragile glue code that hosted experiment trackers depend on.
01
Reports for your experiments
Evaluate one or several estimators, with a single train-test split or with cross-validation, and get a structured report from one entry point. You get back an estimator report, a cross-validation report, or a comparison report, each with the same mental model, so you can explore how your predictive models behave while you experiment.
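Whichever report comes back, the interaction is the same. A minimal sketch, assuming a report produced by skore.evaluate as in the examples further down (summarize is how recent skore versions expose the metrics table; run help() to check the exact name in yours):
# help() prints what the report offers: metrics, plots, and so on
report.help()
# Pull a metrics summary computed on the held-out data
report.metrics.summarize()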
02
Get insights that matter
Turn results into clear visualizations through rich displays, and pull the underlying tables when you need to dig deeper, so figures and numbers stay tied to the same evaluation instead of drifting across notebooks and slides.
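With a regression report like the ones below, that pairing can look roughly like this (a sketch, assuming skore's display objects and that prediction_error and frame exist under these names in your version):
# One display object backs both the figure and the numbers
display = report.metrics.prediction_error()
display.plot()   # render the figure
display.frame()  # the underlying table, for digging deeper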
03
Store and retrieve your reports
Projects store and retrieve your reports so you can revisit insights or compare with new experiments later. Keep everything on disk locally, or use Skore Hub when you want exploration and search in a dedicated interface.
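A minimal sketch of the local workflow, assuming the Project put/get interface (the project name here is illustrative):
import skore

# A local project persists reports on disk
project = skore.Project("my_experiments")
project.put("ridge-baseline", report)

# Later, in a fresh session, retrieve the report by key
report = skore.Project("my_experiments").get("ridge-baseline")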
See it in practice
Build a report, then push it where your team works.
On the left, skore.evaluate follows the same patterns as our getting started examples. On the right, persist the same report to Skore Hub or to an MLflow tracking server, your choice.
1
Create a report
import skore, skrub
from skrub.datasets import fetch_employee_salaries
from sklearn.linear_model import Ridge

# Example regression data: predict employee salaries
data = fetch_employee_salaries()
df, y = data.X, data.y
model = skrub.tabular_pipeline(Ridge())
report = skore.evaluate(model, df, y, splitter=0.2)  # single train-test split, 20% held out
report
import skore, skrub
from skrub.datasets import fetch_employee_salaries
from sklearn.linear_model import Ridge

# Same data and model as before
data = fetch_employee_salaries()
df, y = data.X, data.y
model = skrub.tabular_pipeline(Ridge())
report = skore.evaluate(model, df, y, splitter=5)  # 5-fold cross-validation
report
import skore, skrub
from skrub.datasets import fetch_employee_salaries
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor

data = fetch_employee_salaries()
df, y = data.X, data.y
models = [
    skrub.tabular_pipeline(Ridge()),
    skrub.tabular_pipeline(RandomForestRegressor()),
]
# Passing several estimators yields a comparison report
comparison_report = skore.evaluate(models, df, y, splitter=5)
comparison_report
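Then the push. Persisting the report to Skore Hub can look like the sketch below (the hub:// project naming is an assumption, not the confirmed scheme; the MLflow destination is shown further down):
import skore

# Same report object, different destination: a Hub-backed project
hub_project = skore.Project("hub://my-workspace/my-experiments")
hub_project.put("ridge-baseline", report)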
We are not replacing MLflow. We are complementing it.
Keep MLflow for the operational layer: tracking, registry, and the path to production. Add Skore for methodological depth on scikit-learn workflows: warnings about common pitfalls before you train, metrics that match your task, and fold-level views of cross-validation. What you promote is then backed by analysis you can explain, not only by numbers in a table, and the glue between the two is thin, as the sketch after this list shows.
Pitfall warnings before training
Task-appropriate metrics
Fold-level CV diagnostics
Explainable promotions
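For instance, the numbers behind a skore report can feed the MLflow run you already track. A hedged sketch of that glue (the summarize().frame() accessor is an assumption about how the metrics table is exposed; the MLflow side is standard):
import mlflow

# Pull the evaluation numbers out of the skore report
scores = report.metrics.summarize().frame()  # assumed accessor

# Log them to an MLflow run alongside your existing tracking
with mlflow.start_run(run_name="ridge-baseline"):
    # assumes a single score column; adapt to your table's shape
    for metric, value in scores.iloc[:, 0].items():
        mlflow.log_metric(str(metric), float(value))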
Hands-on support
Need help? We've got hands-on support.
Probabl offers Forward Deployed Engineering engagements for teams building or restructuring their ML workflow. If you want expert guidance on integrating Skore alongside MLflow or Skore Hub, we can help.