For W&B users

W&B does training monitoring well. Skore adds what it lacks for scikit‑learn.

Skore is a Python library built by the scikit‑learn founders. It evaluates and inspects your predictive models, structures your experiments, and stores results on your machine, so you can start without operating a separate tracking server. When you are ready, sync reports to Skore Hub. Open source.

By the scikit‑learn core maintainers · Open source on GitHub · Open core, MIT license · Local‑first; optional Skore Hub sync

evaluate_and_store_local.py
# Evaluate, then persist on disk (local project)
from skore import Project, evaluate

report = evaluate(estimator, X, y, splitter=5)  # 5-fold cross-validation
project = Project("my-experiments", mode="local")
project.put("baseline", report)
pip install skore
conda install -c conda-forge skore
Different jobs, different surfaces

W&B is good at what it does. This isn't about replacing it. It's about what Skore adds for scikit‑learn.

W&B emphasizes training monitoring and flexible cloud dashboarding. For classical ML with scikit‑learn, Skore gives you structured evaluation and retrieval on your machine first (reports, visuals, and tables tied to one evaluation), without rebuilding the glue code that hosted trackers depend on. When your team wants a shared, data‑science‑oriented workspace, sync the same reports to Skore Hub.

Use W&B where it shines. Use Skore where scikit‑learn evaluation is the job.

What Skore adds

Track your data science.

Skore is a Python library to evaluate and get insights from your predictive models. It structures and stores your experiments so you can easily retrieve them later, without rebuilding the fragile glue code that hosted experiment trackers depend on. Alongside W&B, it gives you a dedicated scikit‑learn evaluation layer that runs local‑first.

01

Reports for your experiments

Evaluate one or several estimators with a holdout split or cross‑validation and get a structured report from a single entry point. Estimator, cross‑validation, and comparison reports all share the same mental model.
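
A minimal sketch of that single entry point, using the same skore.evaluate call shown later on this page (df and y stand in for your data):

import skore
from sklearn.linear_model import Ridge

model = Ridge()
holdout_report = skore.evaluate(model, df, y, splitter=0.2)  # holdout: 20% held out
cv_report = skore.evaluate(model, df, y, splitter=5)         # cross-validation: 5 folds
comparison = skore.evaluate([model, Ridge(alpha=10.0)], df, y, splitter=5)  # comparison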

02

Get insights that matter

Turn results into clear visualizations through rich displays, and pull the underlying tables when you need to dig deeper, so figures and numbers stay tied to the same evaluation.
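
A minimal sketch of pulling both, continuing from a report returned by skore.evaluate. The summarize, plot, and frame accessors here are assumptions for illustration, not confirmed API; check the Skore docs for exact names:

# Hypothetical accessors, shown for illustration only
display = report.metrics.summarize()  # rich display of the evaluation
display.plot()                        # render the figure
table = display.frame()               # the same numbers as a table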

03

Store and retrieve your reports

Projects store and retrieve your reports so you can revisit insights or compare with new experiments later. Keep everything on disk locally, or use Skore Hub for a dedicated interface.
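
A minimal sketch of the round trip, extending the put call shown earlier; the get retrieval call is an assumption for illustration:

from skore import Project

project = Project("my-experiments", mode="local")
project.put("baseline", report)     # store today's evaluation
baseline = project.get("baseline")  # hypothetical: retrieve it later by key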

04

Metrics that fit your task

Skore selects appropriate metrics for your estimator and problem type so your evaluation matches what you actually optimized for, without extra configuration.
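
Concretely, the same call covers different tasks; a sketch assuming skore.evaluate infers the problem type from the estimator and target:

import skore
from sklearn.linear_model import LogisticRegression, Ridge

# Same entry point; Skore picks task-appropriate metrics
clf_report = skore.evaluate(LogisticRegression(), X, y_labels, splitter=5)  # classification
reg_report = skore.evaluate(Ridge(), X, y_amounts, splitter=5)              # regression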

05

Methodological warnings

Shuffling time‑series data, ignoring class imbalance, fitting preprocessors on the full dataset: Skore surfaces actionable warnings before you treat noisy metrics as ground truth.
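
The time‑series case makes the stakes concrete: shuffled folds let the model train on the future and validate on the past, inflating scores. A sketch of the ordered alternative, assuming splitter also accepts a scikit‑learn CV object (an assumption, not confirmed by this page):

import skore
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit

# Ordered splits keep training strictly before validation in time,
# the setup Skore's warnings steer you toward on temporal data.
report = skore.evaluate(Ridge(), X_time_ordered, y, splitter=TimeSeriesSplit(n_splits=5))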

06

Built for scikit‑learn workflows

By the scikit‑learn core maintainers. Skore is built around scikit‑learn‑compatible estimators, not a generic adapter. Use skore.Project with mode="hub" when you want reports in Skore Hub.

Side by side

Skore and W&B: different tools, different jobs.

W&B

Weights & Biases focuses on

Typical setup centers on cloud logging and dashboards; scikit‑learn use cases often still mean manual metric choices and custom panels.

  • Training monitoring for deep learning and LLMs
  • Flexible loss and metric logging
  • Cloud‑based dashboarding and team collaboration
  • Interactive visualizations and flexible reporting
Skore

Skore focuses on

You didn't know it, but we're actually pretty good friends. Join us and find out.

  • Evaluate scikit‑learn‑compatible models, local‑first, no tracking server required
  • Structured reports, editorial guidance, not blank dashboards
  • Auto metric selection and methodological warnings
  • Projects to store and retrieve reports; optional Skore Hub sync
See it in practice

Less scaffolding. More signal.

Call skore.evaluate with any scikit‑learn‑compatible estimator and you get a structured report. That report adapts to the kind of evaluation you want: holdout, cross‑validation, or a comparison across models.

Add Skore alongside W&B. No need to rip out your current stack.

evaluate_holdout.py
import skore, skrub
from sklearn.linear_model import Ridge

model = skrub.tabular_pipeline(Ridge())
# Holdout split: keep 20% of the data for evaluation
report = skore.evaluate(model, df, y, splitter=0.2)
report

evaluate_cross_validation.py
import skore, skrub
from sklearn.linear_model import Ridge

model = skrub.tabular_pipeline(Ridge())
# Cross-validation: 5 folds
report = skore.evaluate(model, df, y, splitter=5)
report

compare_models.py
import skore, skrub
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor

# Compare several candidates in one structured report
models = [
    skrub.tabular_pipeline(Ridge()),
    skrub.tabular_pipeline(RandomForestRegressor()),
]
comparison_report = skore.evaluate(models, df, y, splitter=5)
comparison_report
Skore Hub

Store reports. Open them where data scientists look.

Most workflows start with projects on disk: evaluate, store runs, and reopen reports without running your own tracking server. When you want experiments in a shared, hosted workspace, point the same project at Skore Hub: reports sync there and open in a UI built for data scientists.

push_to_hub.py
import skore

# `report` from skore.evaluate(...)
project = skore.Project(
    name="adult_census_survey",
    mode="hub",
)
project.put("ridge", report)
Forward‑deployed engineering

Need help? We've got hands‑on support.

Probabl offers Forward Deployed Engineering engagements for teams building or restructuring their ML workflow. If you want expert guidance on integrating Skore alongside Weights & Biases or Skore Hub, we can help.

Talk to the team