I generally agree, but one thing I find very frustrating (i.e. have not figured out yet) is how to deal with extras well, particularly with pytorch. Some of my machines have a GPU, some don't, and commands like "uv add" end up uninstalling everything and installing the opposite variant, forcing a resync with the appropriate --extra flag. The examples in the docs do things like CPU on Windows and GPU on Linux, but all my boxes are Linux. There has to be a way to tell it "hey, I want --extra gpu, always, on this box" — but I haven't figured it out yet.
Getting the right version of PyTorch installed to have the correct kind of acceleration on each different platform you support has been a long-standing headache across many Python dependency management tools, not just uv. For example, here's the bug in poetry regarding this issue: https://github.com/python-poetry/poetry/issues/6409
As I understand it, recent versions of PyTorch have made this process somewhat easier, so maybe it's worth another try.
uv actually handles the issues described there very well (the uv docs have a page showing a few ways to do it). The issue for me is that uv has massive amnesia about which one was selected, and you end up thrashing packages because of that. uv is very fast at thrashing, though, so it's not as bad as it would be if poetry were thrashing.
That's fine if you are just trying to get it running on your machine specifically, but the problems come in when you want to support multiple different combinations of OS and compute platform in your project.
It is... but basically it needs to remember which extras are synced. For example, if you use an extra, you have to keep track of it constantly, because sync thrashes around between states all the time unless you pay close and tedious attention. At least, I haven't figured out how to make it remember which extras are "active":
uv sync --extra gpu
uv add matplotlib # the implicit sync this runs undoes --extra gpu
uv sync # oops, this also drops all the extras
What I have to do to avoid this is remember to use --no-sync all the time and then meticulously sync by hand, remembering all the extras I actually want at the moment:
uv sync --extra gpu --extra foo --extra bar
uv add --no-sync matplotlib
uv sync --extra gpu --extra foo --extra bar
It's just so... tedious and kludgy. It needs an "extras.lock" or "sync.lock" or something. I would love it if someone told me I'm wrong and am missing something obvious in the docs.
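In the meantime, the closest I've got is a tiny wrapper script (purely hypothetical glue, not a uv feature; ".uv-extras" is just a plain text file I invented that lists the extras I want active on this box, e.g. "gpu foo bar"):

#!/bin/sh
# Hypothetical workaround: re-apply the extras listed in .uv-extras on every sync.
set -e
extras=""
for e in $(cat .uv-extras); do
  extras="$extras --extra $e"
done
uv add --no-sync "$@"   # add the new dependency without the implicit sync
uv sync $extras         # then sync with the remembered extras

Then I call that instead of plain "uv add", which is exactly the kind of bookkeeping I'd rather uv did for me.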
Thank you! That's good to know. Unfortunately it doesn't seem to work for "extras". There may be some setting other than sync.include-groups, but I haven't found it yet.
What I am struggling with is what you get after following the Configuring Accelerators With Optional Dependencies example:
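For reference, that docs example looks roughly like this (quoting from memory, so the exact versions and details may differ; the index names and the cu124 extra are the ones the docs happen to use):

[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = []

[project.optional-dependencies]
cpu = ["torch>=2.5.1"]
cu124 = ["torch>=2.5.1"]

[tool.uv]
# Declares the two extras mutually exclusive, so resolution never tries to satisfy both at once.
conflicts = [
  [
    { extra = "cpu" },
    { extra = "cu124" },
  ],
]

[tool.uv.sources]
# Pull torch from a different index depending on which extra is enabled.
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu124", extra = "cu124" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true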
Part of what that example does is set up rules that prevent installing the cpu and gpu versions simultaneously (which isn't possible anyway). If you use the optional-dependencies example pyproject.toml as-is, then this is what happens:
$ uv sync --extra cpu --extra cu124
Using CPython 3.12.7
Creating virtual environment at: .venv
Resolved 32 packages in 1.65s
error: Extras `cpu` and `cu124` are incompatible with the declared conflicts: {`project[cpu]`, `project[cu124]`}
And if you remove the declared conflict, then uv ends up with two incompatible sources to install the same packages from:
uv sync --extra cpu --extra cu124
error: Requirements contain conflicting indexes for package `torch` in all marker environments:
- https://download.pytorch.org/whl/cpu
- https://download.pytorch.org/whl/cu124
After your comment I initially thought that perhaps the extras could somehow be rewritten as dependency groups, so that ~/.config/uv/config.toml could select them, but according to the docs dependency groups are not allowed to conflict with each other: you must be able to install all of them simultaneously (which makes sense, since there is an --all-groups flag).
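Concretely, the rewrite I had in mind would have looked something like this (hypothetical, and not usable here for exactly that reason):

[dependency-groups]
# Hypothetical: the cpu/cu124 extras recast as groups. Groups can't be declared
# mutually exclusive the way the extras above are, so this doesn't solve it.
cpu = ["torch>=2.5.1"]
cu124 = ["torch>=2.5.1"]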
This happened to me too; that is why I stopped using it for ML-related projects and stuck to good old venv. For other Python projects I can see it being very useful, however.
I'm not sure I've fully understood your issue, but I can do platform-dependent, index-based `pytorch` installation using a snippet like the following in `pyproject.toml`, and `uv sync` just handles it accordingly.
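Roughly like this (illustrative only; the markers and index names are just the obvious Linux-gets-CUDA, everything-else-gets-CPU split, and torch itself stays in the normal [project] dependencies):

[tool.uv.sources]
# Pick the index by platform marker instead of by extra.
torch = [
  { index = "pytorch-cu124", marker = "platform_system == 'Linux'" },
  { index = "pytorch-cpu", marker = "platform_system != 'Linux'" },
]

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true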
Some Windows machines have compatible GPUs while others don't, so this doesn't necessarily help. What is really required is querying the OS for what type of compute unit it has and then installing the right version of an ML library, but I'm not sure that will be done.
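In the meantime you can approximate it with a bit of shell around the sync step (hypothetical sketch; it assumes the cpu/cu124 extras from the optional-dependencies example earlier in the thread, and treats a working nvidia-smi as "has a usable GPU"):

# Hypothetical: choose the extra based on whether an NVIDIA GPU is visible.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  uv sync --extra cu124
else
  uv sync --extra cpu
fi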
Getting something that works out of the box on just your own computer is normally fine. Getting something that works out of the box on many different computers, with many different OS and hardware configurations, is much, much harder.