Skore is a Python library built by the scikit-learn founders. It evaluates and inspects your predictive models, structures your experiments, and stores results on your machine—so you can start without operating a separate tracking server. When you are ready, sync reports to Skore Hub. Open source.
Comet ML is similar to other experiment platforms: strong on logging, dashboards, and collaboration, but light on first-class scikit-learn evaluation. Skore runs local-first, with structured reports, metrics matched to your task, and guidance for classical ML, so you can nail the model story before you push metrics to Comet.
# Evaluate, then persist on disk (local project)
from skore import Project, evaluate

report = evaluate(estimator, X, y, splitter=5)

project = Project("my-experiments", mode="local")
project.put("baseline", report)
Comet emphasizes experiment logging, reproducibility, and cloud dashboards—similar in spirit to other MLOps trackers, with less depth for scikit-learn–native evaluation. Skore gives you structured reports and retrieval on your machine first—figures and tables tied to one evaluation—without hand-wiring every metric and plot. When your team wants a shared, data-science–oriented workspace, sync the same reports to Skore Hub. Use Comet for your run history; use Skore where classical ML evaluation is the job.
Skore is a Python library to evaluate and get insights from your predictive models. It structures and stores your experiments so you can easily retrieve them later, without rebuilding the fragile glue code that hosted experiment trackers often require. Alongside Comet, it gives you a dedicated scikit-learn evaluation layer that runs local-first.
Evaluate one or several estimators with a holdout split or cross-validation and get a structured report from one entry point. You get an estimator report, a cross-validation report, or a comparison report—each with the same mental model so you can explore how your predictive models behave while you experiment.
Turn results into clear visualizations through rich displays, and pull the underlying tables when you need to dig deeper—so figures and numbers stay tied to the same evaluation instead of drifting across notebooks and slides.
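As a rough sketch of that workflow (the accessor names on the report below are illustrative assumptions, not quoted from the skore documentation), one report object hands you both the figure and the numbers behind it:

import skore

# Minimal sketch. The `metrics.summarize()`, `.plot()` and `.frame()` accessors
# are illustrative assumptions, not documented skore API. `model`, `df`, `y`
# are as in the examples further down.
report = skore.evaluate(model, df, y, splitter=5)

summary = report.metrics.summarize()   # rich display of the evaluation
summary.plot()                         # render the figure
metrics_table = summary.frame()        # pull the underlying table for deeper digging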
Projects store and retrieve your reports so you can revisit insights or compare with new experiments later. Keep everything on disk locally, or use Skore Hub when you want exploration and search in a dedicated interface.
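A minimal sketch of that round trip on disk; a `project.get(...)` that mirrors `project.put(...)` is an assumption here, not confirmed API:

from skore import Project

# Assumed retrieval pattern: `get` mirrors `put`; adjust to the actual API.
project = Project("my-experiments", mode="local")
project.put("baseline", report)      # persist today's evaluation
baseline = project.get("baseline")   # reopen it later to compare with new runs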
Skore selects appropriate metrics for your estimator and problem type so your evaluation matches what you actually optimized for—without extra configuration. That complements Comet, where you typically choose what to log yourself.
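Concretely, the idea is that the same entry point adapts to the task. A hedged sketch, assuming skore.evaluate infers the problem type from the estimator and target (X_clf, y_clf, X_reg, y_reg are placeholder datasets):

import skore
from sklearn.linear_model import LogisticRegression, Ridge

# Assumption: skore.evaluate picks metrics from the estimator and target,
# so no metric list is configured by hand. X_clf/y_clf and X_reg/y_reg are
# placeholder classification and regression datasets.
clf_report = skore.evaluate(LogisticRegression(), X_clf, y_clf, splitter=5)  # classification metrics
reg_report = skore.evaluate(Ridge(), X_reg, y_reg, splitter=5)               # regression metrics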
Shuffling time-series data, ignoring class imbalance, fitting preprocessors on the full dataset—these kinds of setup issues can inflate your scores. Skore surfaces actionable warnings in the evaluation flow so you catch them early, before you treat noisy metrics as ground truth.
By the scikit-learn core maintainers. Skore is built around scikit-learn–compatible estimators—not a generic adapter. Use skore.Project with mode="hub" when you want reports in Skore Hub’s UI (metrics, folds, figures in one place).
Scikit-learn is supported, but the experience is closer to generic metric logging than a dedicated evaluation product—often more setup than teams expect.
You didn't know it, but we're actually pretty good friends. Join us and find out.
Call skore.evaluate with any scikit-learn–compatible estimator and you get a structured report. That report adapts to the kind of evaluation you want—holdout, cross-validation, or a comparison across models. Add Skore alongside Comet—no need to rip out your current stack.
import skore
import skrub
from sklearn.linear_model import Ridge

model = skrub.tabular_pipeline(Ridge())
report = skore.evaluate(model, df, y, splitter=0.2)
report
import skore
import skrub
from sklearn.linear_model import Ridge

model = skrub.tabular_pipeline(Ridge())
report = skore.evaluate(model, df, y, splitter=5)
report
import skore
import skrub
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor

models = [
    skrub.tabular_pipeline(Ridge()),
    skrub.tabular_pipeline(RandomForestRegressor()),
]
comparison_report = skore.evaluate(models, df, y, splitter=5)
comparison_report
Most workflows start with projects on disk: evaluate, store runs, and reopen reports without running your own tracking server. When you want experiments in a shared, hosted workspace, point the same project at Skore Hub—reports sync there and open in a UI built for data scientists (metrics, folds, figures in one place). See skore.Project for local versus hub modes.
import skore

# `report` from skore.evaluate(...)
project = skore.Project(name="adult_census_survey", mode="hub")
project.put("ridge", report)
Probabl offers Forward Deployed Engineering engagements for teams building or restructuring their ML workflow. If you want expert guidance on integrating Skore alongside Comet ML, or on adopting Skore Hub, we can help.
Talk to the team