Are people seeing it work well in GPU/pydata land and for building multi-platform Docker images?
In the data science world, conda/mamba was needed because of this kind of thing, but there was a lot of room for improvement. What we basically want is a lockfile, incremental and fast builds, and multi-arch support for these tricky deps.
It works transparently. The lock file is cross-platform by default. When using pytorch, it automatically installs with MPS support on macOS and CUDA on Linux; everything just works. I can't speak for Windows, though.
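For what it's worth, here's the quick sanity check I run after a fresh sync to confirm the platform-appropriate PyTorch build actually came down; it's just a minimal sketch using stock PyTorch APIs, nothing uv-specific:

    import torch

    # Report which accelerated backend this PyTorch build can actually use.
    if torch.cuda.is_available():
        # torch.version.cuda is the CUDA toolkit version the wheel was built against.
        print("CUDA build:", torch.version.cuda)
    elif torch.backends.mps.is_available():
        print("MPS (Apple Silicon) backend available")
    else:
        print("CPU-only build")

On my machines that prints the CUDA version on the Linux box and the MPS line on the Mac, with the same lockfile.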
I think the comparison for data work is more with conda than poetry. Afaict poetry targets the "easier" case of pure-Python packages, not native areas like prebuilt platform-dependent binaries. Maybe poetry has gotten better, but I typically see it as a nice-to-have for local dev and for rounding out the build, not as the recommended install flow for natively-aligned builds.
So I'm still curious how folks are navigating the "harder" typical case of the pydata world; getting an improved option here is exciting!
That's fair. I guess when you see people champion poetry (less so lately) you hope it works as well as pip/conda despite the complexities of pytorch in particular. Finding that the community simply doesn't use it with that library comes as a shock of sorts: like this package manager is great, but "your type ain't welcome".
In any case, I believe uv is trying to be _the_ solution, and I'd be pretty surprised if your libs weren't well supported, or at least on the roadmap.