Hacker News

I highly, highly recommend uv. It resolves & installs dependencies incredibly fast, and the CLI is very intuitive once you've memorized a couple of commands. It handles monorepos well with the "workspaces" concept, it can replace pipx with "uv tool install," it handles building & publishing, and the Docker image is great: you just add a line to your Dockerfile that copies the uv binary from /uv in the official image.
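A sketch of that Docker pattern, based on my reading of the uv docs (the base image, tag, and `main.py` entry point here are just example choices):

```dockerfile
FROM python:3.12-slim

# Copy the uv and uvx binaries from the official uv image
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

WORKDIR /app
COPY . .

# Install locked dependencies, then run the app through uv
RUN uv sync --frozen
CMD ["uv", "run", "main.py"]
```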

I've used 'em all: pip + virtualenv, conda (and all its variants), Poetry, PDM (my personal favorite before switching to uv). uv handles everything I need in a way that means I don't have to reach for other tools, or really even think about what uv is doing. It just works, and it works great.

I even use it for small scripts. You can run "uv init --script <script_name.py>" and then "uv add package1 package2 package3 --script <script_name.py>". This adds an oddly formatted comment to the top of the script that tells uv which packages to install when you run it. The first time you run "uv run <script_name.py>", uv installs everything you need and executes the script. Subsequent runs use the cached dependencies, so they start immediately.
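To illustrate, a script with that comment block looks roughly like this (requests and rich are just example package names; the exact block uv writes may differ slightly):

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests",
#     "rich",
# ]
# ///

# The body below is a normal Python script. `uv run` reads the comment block
# above, installs the listed packages into a cached environment, then runs it.
import sys

print(f"hello from Python {sys.version_info.major}")
```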

If you're going to ask me to pitch you on why it's better than your current preference, I'm not going to do that. Uv is very easy to install & test, I really recommend giving it a try on your next script or pet project!






The script thing is great. By the way, those 'oddly formatted' comments at the top are not a uv thing; they're a new official Python metadata format (inline script metadata, PEP 723), specifically designed to make it possible for third-party tools like uv to figure out and install the relevant packages.

And in case it wasn't clear to readers of your comment, uv run script.py creates an ephemeral venv and runs your script in that, so you don't pollute your system env or whatever env you happen to be in.


I generally agree, but one thing I find very frustrating (i.e. have not figured out yet) is how to deal with extras well, particularly with PyTorch. Some of my machines have a GPU, some don't, and things like "uv add" end up uninstalling everything and installing the opposite, forcing a resync with the appropriate --extra flag. The examples in the docs do things like CPU on Windows and GPU on Linux, but all my boxes are Linux. There has to be a way to tell it "hey, I always want --extra gpu on this box," but I haven't figured it out yet.

Getting the right version of PyTorch installed to have the correct kind of acceleration on each different platform you support has been a long-standing headache across many Python dependency management tools, not just uv. For example, here's the bug in poetry regarding this issue: https://github.com/python-poetry/poetry/issues/6409

As I understand it, recent versions of PyTorch have made this process somewhat easier, so maybe it's worth another try.


uv actually handles the issues described there very well (the uv docs have a page showing a few ways to do it). The issue for me is that uv has massive amnesia about which one was selected, and you end up thrashing packages because of that. uv is very fast at thrashing, though, so it's not as bad as if Poetry were thrashing.

I end up going to the torch website and they have a nice little UI I can click what I have and it gives me the pip line to use.

That's fine if you are just trying to get it running on your machine specifically, but the problems come in when you want to support multiple different combinations of OS and compute platform in your project.

On NVIDIA Jetson systems, I always end up compiling torchvision, while torch always comes as a wheel. It seems so random.

It sounds like you're just looking for dependency groups? uv supports adding custom groups (and comes with syntactic sugar for a development group).

It is... but basically it needs to remember which groups are synced. For example, if you use an extra, you have to keep track of it constantly, because sync thrashes between states all the time unless you pay close and tedious attention. At least I haven't figured out how to make it remember which extras are "active".

    uv sync --extra gpu
    uv add matplotlib # the sync this runs undoes the --extra gpu
    uv sync # oops also undoes all the --extra
What you have to do to avoid this is remember to use --no-sync all the time and then meticulously sync manually, remembering all the extras you actually currently want:

    uv sync --extra gpu --extra foo --extra bar
    uv add --no-sync matplotlib
    uv sync --extra gpu --extra foo --extra bar
It's just so... tedious and kludgy. It needs an "extras.lock" or "sync.lock" or something. I would love it if someone told me I'm wrong and missing something obvious in the docs.

To make the change in your environment:

1. Create or edit the UV configuration file in one of these locations:

- `~/.config/uv/config.toml` (Linux/macOS)

- `%APPDATA%\uv\config.toml` (Windows)

2. Add a section for default groups to sync:

```toml
[sync]
include-groups = ["dev", "test", "docs"] # Replace with your desired group names
```

Alternatively, you can do something similar in pyproject.toml if you want to apply this to the repo:

```toml
[tool.uv]
sync.include-groups = ["dev", "test", "docs"] # Replace with your desired group names
```


Thank you! That's good to know. Unfortunately it doesn't seem to work for "extras". There may be some target other than sync.include-groups but I haven't found it yet.

What I am struggling with is what you get after following the Configuring Accelerators With Optional Dependencies example:

https://docs.astral.sh/uv/guides/integration/pytorch/#config...
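For reference, the shape of that guide's pyproject.toml is roughly this (a sketch from memory of the uv PyTorch guide; version pins and index names may differ):

```toml
[project.optional-dependencies]
cpu = ["torch"]
cu124 = ["torch"]

[tool.uv]
# Declare that the two extras can never be installed together
conflicts = [[{ extra = "cpu" }, { extra = "cu124" }]]

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", extra = "cpu" },
    { index = "pytorch-cu124", extra = "cu124" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```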

Part of what that does is set up rules that prevent simultaneously installing the CPU and GPU versions (which isn't possible). If you use the optional-dependencies example pyproject.toml, then this is what happens:

    $ uv sync --extra cpu --extra cu124
    Using CPython 3.12.7
    Creating virtual environment at: .venv
    Resolved 32 packages in 1.65s
    error: Extras `cpu` and `cu124` are incompatible with the declared conflicts: {`project[cpu]`, `project[cu124]`}
And if you remove the declared conflict, then uv ends up with two incompatible sources to install the same packages from:

    uv sync --extra cpu --extra cu124
    error: Requirements contain conflicting indexes for package `torch` in all marker environments:
    - https://download.pytorch.org/whl/cpu
    - https://download.pytorch.org/whl/cu124
After your comment I initially thought the extras might somehow be rewritten as dependency groups to use the ~/.config/uv/config.toml, but according to the docs, dependency groups are not allowed to conflict with each other and must all be installable simultaneously (which makes sense, since there is an --all-groups flag).

You can control dependencies per platform

https://docs.astral.sh/uv/concepts/projects/dependencies/#pl...

Not sure if it's as granular as you might need




I haven't tried it yet but that looks like exactly what I've been missing.

This happened to me too; that is why I stopped using it for ML-related projects and stuck to good old venv. For other Python projects, I can see it being very useful, however.

I'm not sure if I got your issue, but I can do platform-dependent `torch` `index` selection using the following snippet in `pyproject.toml`, and `uv sync` just handles it accordingly:

    [tool.uv.sources]
    torch = [{ index = "pytorch-cu124", marker = "sys_platform == 'win32'" }]


Some Windows machines have compatible GPUs while others don't, so this doesn't necessarily help. What is really required is querying the OS for what type of compute unit it has and then installing the right version of an ML library, but I'm not sure that will be done.

Even without querying, it could just honor an environment variable, or somehow remember which extras are already applied to the already-synced .venv.

I use uv + torch + CUDA on Linux just fine, and I've never used the extra flag. I wonder what the problem is here?

Getting something that works out of the box on just your computer is normally fine. Getting something that works out of the box on many different computers with many different OS and hardware configurations is much much harder.

I didn't know that uv would now edit the script for you. That is just icing on the cake!

For the curious, the format is codified here: https://peps.python.org/pep-0723/


The install speed alone makes it worthwhile for me. It went from minutes to seconds.

I was working on a Raspberry Pi at a hackathon, and pip install was eating several minutes at a time.

Tried uv for the first time and it was down to seconds.


Why would you be redoing your venv more than once?

Once rebuilding your venv takes negligible time, it opens up for all kinds of new ways to develop. For example I now always run my tests in a clean environment, just to make sure I haven't added anything that only happens to work in my dev venv.

That's smart. Oh, you used `pip install` to fix a missing import, but forgot to add it to pyproject.toml? You'll find out quickly.

It has nothing to do with redoing venv: some package installs were just taking multiple minutes.

I cancelled one at 4 minutes before switching to uv and having it finish in a few seconds


Can confirm this is all true. I used to be the "why should I switch" guy. The productivity improvement from not context switching while pip installs a requirements file is completely worth it.

That scripting trick is awesome! One of the really nice things about Elixir and its dependency manager is that you can just write Mix.install(…) in your script and it’ll fetch those dependencies for you, with the same caching you mentioned too.

Does uv work with Jupyter notebooks too? When I used it a while ago dependencies were really annoying compared to Livebook with that Mix.install support.


uv offers another useful feature for inline dependencies, which is the exclude-newer field[1]. It improves reproducibility by excluding packages released after a specified date during dependency resolution.
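In a script's metadata block, that looks roughly like this (the date and package here are just examples; see the uv scripts guide linked below for the exact syntax):

```python
# /// script
# dependencies = [
#     "requests",
# ]
# [tool.uv]
# exclude-newer = "2023-10-16T00:00:00Z"
# ///
```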

I once investigated whether this feature could be integrated into Mix as well, but it wasn't possible since hex.pm doesn't provide release timestamps for packages.

> Does uv work with Jupyter notebooks too?

Yes![2]

[1] https://docs.astral.sh/uv/guides/scripts/#improving-reproduc... [2] https://docs.astral.sh/uv/guides/integration/jupyter/


As a person who doesn't often work on Python code but occasionally needs to run a server or tool, I find uv a blessing. Before, I would beg people to help me, just to avoid figuring out what combination of obscure Python tools I needed. Now doing "uv run server.py" usually works.

I happened to use uv recently for a pet project, and I totally agree with you. It's really, really good. I couldn't believe its dependency resolution and package pulling could be so fast. IMHO, it's the Python package manager (I don't know the most suitable name for the category) done right; everything just works, the correct way.

uv is great and we’re switching over from conda for some projects. The resolver is lightning fast and the toml support is good.

Having said that, there are 2 areas where we still need conda:

- uv doesn't handle non-Python packages, so if you need to use something like MKL, no luck

- uv assumes that you want one env per project. However, with complex projects you may need a different env for different branches of your code base. Conda makes this easy: just activate the conda env you want (all of your envs can be stored in some central location outside your projects) and run your code. uv wants to use the project toml file and stores the packages in .venv by default (which you don't want to commit, but then need different versions of). Yes, you can store your project venv elsewhere with an env var, but that's not a practical solution. There needs to be support for multiple toml files, where the location of the env can be specified inside the toml file (not in an env var).


You may want to check out uv's workspaces; they're very handy for large monorepos.

Thanks. I looked at that but I believe it solves a different problem.

> It solves & installs dependencies incredibly fast

If you are lucky, and you don't have to build them, because the exceptionally gifted person who packaged them didn't know how to distribute them, and the bright minds running PyPI.org allowed that garbage to be uploaded and made it so pip would install it by default.

> can replace pipx with "uv tool install,"

That's a stupid idea. Nobody needed pipx in the first place... The band-aid that was applied some years ago is now cast in stone...

The whole idea of Python tools trying to replace virtual environments, but doing it slightly better, is moronic. Virtual environments are the band-aid; they need to go. The Python developers need to be pressured into removing this garbage and instead working on program manifests or something similar. Python has virtual environments due to the incompetence of its authors and their unwillingness to make things right once that incompetence was discovered.

----

NB. As it stands today, if you want to make your project work well, you shouldn't use any tools that install packages by solving dependencies and downloading them from PyPI. It's not the fault of the tools doing that; it's the bad design of the index.

The reasonable thing to do is to install the packages you need (for applications) during development, figure out what you actually need, and then store the parts your package needs to work locally. Only repeat this process when you feel the need to upgrade.

If you need packages for libraries, then you need a way to install various permutations within the allowed versions: no package-installation tool today knows how to do that, so you might as well not use any anyway.

But the ironic part is that nobody in the Python community does it right. And that's why there are tons of incompatibilities, and the numbers increase dramatically as projects age even slightly.



