For former Neptune users

You were using Neptune. Here is what comes next.

Neptune is gone. Whatever you migrated to, or wherever you are still looking, Skore is worth ten minutes of your time.

Skore is a Python library built by the scikit-learn founders. It evaluates and inspects your predictive models, structures your experiments, and stores results on your machine, so you can start without operating a separate tracking server. Sync to Skore Hub when you are ready.

By the scikit-learn core maintainers · Open source on GitHub · Open core, MIT license · Local-first, optional Skore Hub sync
Get started in 30 seconds

Pick your package manager. The library works the same either way.

pip install skore
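
Prefer conda? skore is also published on conda-forge (an assumption based on the "pick your package manager" framing; verify the channel before relying on it):

conda install -c conda-forge skore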
What Skore does

Track your data science.

Skore is a Python library to evaluate and get insights from your predictive models. It structures and stores your experiments so you can easily retrieve them later, without rebuilding the fragile glue code that hosted experiment trackers depended on.

01

Reports for your experiments

Evaluate one or several estimators with a single train-test split or cross-validation and get a structured report from one entry point. Estimator report, cross-validation report, comparison report: all share the same mental model, so you can explore how your predictive models behave while you experiment.
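
A minimal sketch of the single-split case (the EstimatorReport keyword arguments follow recent skore releases and may differ in yours):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from skore import EstimatorReport

X_train, X_test, y_train, y_test = train_test_split(
    *load_breast_cancer(return_X_y=True), random_state=0
)

# One entry point, one structured report for a single train-test split.
report = EstimatorReport(
    LogisticRegression(max_iter=1000),
    X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test,
)
report.help()  # lists the metrics and displays this report exposes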

02

Get insights that matter

Turn results into clear visualizations through rich displays, and pull the underlying tables when you need to dig deeper, so figures and numbers stay tied to the same evaluation instead of drifting across notebooks and slides.
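
For example, with a classification report like the one above, the ROC display gives you the figure and keeps the underlying values on the same object (accessor names follow recent skore releases and are an assumption here):

# The same evaluation backs both the figure and the numbers.
display = report.metrics.roc()  # ROC curve display object
display.plot()                  # render the figure; the display keeps the values behind it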

03

Store and retrieve your reports

Projects store and retrieve your reports so you can revisit insights or compare with new experiments later. Keep everything on disk locally, or use Skore Hub when you want exploration and search in a dedicated interface.
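
A minimal local sketch, mirroring the hub example further down (the retrieval method name is an assumption):

import skore

# Local project: reports are stored on disk, no tracking server involved.
project = skore.Project(name="adult_census_survey")
project.put("ridge", report)  # store the report under a key

# In a later session, reopen the project and pull the report back.
ridge_report = project.get("ridge")  # `get` is an assumed method name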

See it in practice

Less scaffolding.
More signal.

Call skore.evaluate with any scikit-learn compatible estimator and you get a structured report. That report adapts to the kind of evaluation you want: holdout, cross-validation, or a comparison across models.

Example

Output: a CrossValidationReport over 5 folds.
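
A sketch of the call behind that output; the exact signature of skore.evaluate, the cv argument in particular, is an assumption:

import skore
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# The report type adapts to the evaluation requested:
# cv=5 here yields a CrossValidationReport over 5 folds.
report = skore.evaluate(LogisticRegression(max_iter=1000), X, y, cv=5)
report.help()  # explore per-fold metrics and plots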
Skore Hub

Store reports.
Open them where data scientists look.

Most workflows start with projects on disk: evaluate, store runs, and reopen reports without running your own tracking server. When you want experiments in a shared, hosted workspace, point the same project at Skore Hub: reports sync there and open in a UI built for data scientists, with metrics, folds, and figures in one place.

See skore.Project for local versus hub modes.

Push a report to the hub
import skore

# `report` from skore.evaluate(...)
project = skore.Project(name="adult_census_survey",
                        mode="hub")
project.put("ridge", report)
Hub view: adult_census_survey / ridge, 5-fold CV, synced seconds ago (live).
Works with your current stack

Already moved on from Neptune?
Good.

Whatever you migrated to, Skore adds the evaluation and diagnostic layer that those tools do not include. No ripping anything out. Just add Skore.

MLflow · Native

Native integration

Push Skore reports directly into MLflow runs. Use pip install skore[mlflow] and log structured reports alongside your existing experiments.
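
The page does not show the call itself, so this sketch is hypothetical: log_report stands in for whatever entry point skore[mlflow] actually exposes:

import mlflow
import skore

# `report` from skore.evaluate(...)
with mlflow.start_run():
    # Hypothetical helper name, for illustration only; check the
    # skore[mlflow] documentation for the documented entry point.
    skore.mlflow.log_report(report)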

Weights and Biases · Complement

Adds what W&B misses

W&B focuses on training monitoring and cloud dashboarding. Skore adds structured evaluation that runs locally first, with the option to sync reports to Skore Hub when you want them in a shared, data-science oriented workspace.

Comet · Complement

Adds methodological depth

Comet logs individual artifacts you select manually. Skore structures the evaluation around your estimator and task type, automatically.

Standalone · Local-first

Start locally, add the hub when you want

Use Skore for evaluation and experiment storage on disk, without running your own tracking infrastructure. When your team needs a hosted UI and shared storage, connect to Skore Hub and sync what you already produced locally.
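
As a sketch of that progression, reusing the Project API from the hub example above (local being the default mode is an assumption; see skore.Project):

import skore

# Start local: reports live on disk, nothing to host.
project = skore.Project(name="adult_census_survey")
project.put("ridge", report)

# Later, same code, shared workspace: switch the mode and sync to the hub.
project = skore.Project(name="adult_census_survey", mode="hub")
project.put("ridge", report)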

You may not have known it, but we are actually pretty good friends. Join us and find out.

Hands-on support

Need help?
We've got hands-on support.

Probabl offers Forward Deployed Engineering engagements for teams rebuilding their ML evaluation workflow. If you want expert guidance on integrating Skore into your current stack, we can help.