
Are people seeing it work well in GPU/pydata land and creating multi-platform Docker images?

In the data science world, conda/mamba was needed for exactly this kind of thing, but it left a lot of room for improvement. What we basically want for these tricky deps is lockfiles, fast incremental builds, and multi-arch support.
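
For reference, a minimal uv-based Dockerfile sketch covering the lockfile + incremental-build part (base image tag, project layout, and the dependencies/project split are assumptions; the cache-mount and `--frozen` pattern follows uv's Docker guide):

```dockerfile
# Sketch only: assumes a standard pyproject.toml + uv.lock project layout.
FROM python:3.12-slim

# Copy the uv binary from the official Astral image.
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

WORKDIR /app

# Install dependencies first so Docker layer caching makes rebuilds incremental;
# --frozen fails the build if uv.lock is out of date instead of re-resolving.
COPY pyproject.toml uv.lock ./
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen --no-install-project

# Then copy the project itself and install it.
COPY . .
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen
```

Since the lockfile is cross-platform, the same Dockerfile should build for both `linux/amd64` and `linux/arm64` via `docker buildx build --platform linux/amd64,linux/arm64 .`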

It works transparently. The lock file is cross-platform by default. When using pytorch, it automatically installs with MPS support on macOS and CUDA on Linux; everything just works. I can't speak for Windows, though.
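
As a concrete sketch of how that per-platform behavior can be configured (the index name, CUDA version, and project metadata here are assumptions; the pattern of marker-gated `tool.uv.sources` plus an `explicit` index is from uv's PyTorch guide):

```toml
[project]
name = "example"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["torch>=2.4"]

# On macOS the plain PyPI wheel already ships MPS support, so no override
# is needed there; on Linux, pull CUDA builds from the PyTorch index.
[tool.uv.sources]
torch = [
  { index = "pytorch-cu124", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```

`explicit = true` keeps the PyTorch index from being consulted for anything other than the packages explicitly pinned to it.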

Yes, because the PyPI cupy/cudnn packages now work seamlessly with JAX. Until not long ago we had to use the conda packages.

Works better than poetry for cuda-versioned pytorch. I don't have overlap with your other domains unfortunately (ML, not data science).

Thanks!

I think the right comparison for data work is with conda, not poetry. AFAICT poetry targets the "easier" case of pure Python, not native territory like prebuilt platform-dependent binaries. Maybe poetry has gotten better, but I typically see it as a nice-to-have for local dev and rounding out the build, not the recommended install flow for native-heavy builds.

So I'm still curious about folks navigating the 'harder' cases typical of the pydata world; getting an improved option here is exciting!


That's fair. I guess when you see people champion poetry (less so lately), you hope it works as well as pip/conda despite the complexities of pytorch in particular. Finding out that the community simply doesn't use that library comes as a bit of a shock - like, this package manager is great, but "your type ain't welcome".

In any case, I believe uv is trying to be _the_ solution, and I'd be pretty surprised if your libs weren't well supported, or at least on the roadmap.



