Hacker News | mrbonner's comments

My actual usage is a mixed bag. For general tools and utilities, I often just use Nix and Home Manager. It is a pain to set up, but once you get it working, it's basically fire and forget. Whenever you need a new app, you just add it to `home.nix` and call it a day.
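For illustration, a minimal `home.nix` sketch (the username, paths, and package names are placeholders, not my actual config):

    # home.nix -- minimal Home Manager sketch; username/paths are placeholders
    { config, pkgs, ... }:
    {
      home.username = "me";
      home.homeDirectory = "/home/me";
      home.stateVersion = "24.05";
      home.packages = with pkgs; [
        ripgrep
        jq
        # add new tools here, then run `home-manager switch`
      ];
    }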

Now, for language development environments, I don't use Nix and just prefer whatever that language's popular choice is. For instance, in Python I use uv. For Node I use npm (or yarn or bun or whatever is in fashion now), Java has mice, Rust has rustup.

It is not a one-size-fits-all solution, but I am not sure we can ever achieve that.


I'd argue Nix is the closest to a one-size-fits-all solution if you're using stuff like uv2nix and its npm equivalents.

Yes, but then I have to deal with all the oddities of combining them.

Yeah it's hard to get excited about these sorts of tools when Nix exists.

I think for development purposes it's a smart idea to use the language-specific conventions/ecosystem.


I love Nix flakes, but for some languages it is still very painful to use.

For example, Julia has an unusual package management system and lots of packages still fail under flakes.


>Rust has rustup.

Do you mean cargo?


Cargo's the package manager and build tool, and doesn't really replace mise. Rustup, as the toolchain version manager, is the mise-equivalent for the Rust ecosystem.

Rustc is the compiler, rustup is the toolchain installer and version manager, cargo is the package manager.

Java has mice? I thought Java had Maven and Gradle? Is mice a new thing?

Java also has SDKMAN!, jabba and the "alternatives" mechanism in Linux distros:

- https://sdkman.io/

- https://github.com/shyiko/jabba

- https://www.man7.org/linux/man-pages/man1/update-alternative...


Java is one of the few languages where I prefer the endemic/specialized version manager in the form of sdkman over mise. It has more Java versions available and also allows you to install a lot of the Java tooling like Gradle and Maven.

My dang autocorrect, it is mise.

I don't understand the foundation models here. Are they new LLMs trained by Apple, similar to Qwen?



But actually released in 2024?


The containerization experience on macOS has historically been underwhelming in terms of performance. Using Docker or Podman on a Mac often feels sluggish and unnecessarily complex compared to native Linux environments. Recently, I experimented with Microsandbox, which was shared here a few weeks ago, and found its performance to be comparable to that of native containers on Linux. This leads me to hope that Apple will soon elevate the developer experience by integrating robust containerization support directly into macOS, eliminating the need for third-party downloads.


Docker, at least, runs a Linux VM that hosts all those containers, which is a lot of needless overhead.

The equivalent of Electron for containers :)


Use Colima.


Also sounds like a staged RSU bonus scheme at tech firms, doesn't it?


Is this whole thing written by AI? The writing is so dry and systematic that I feel like I'm being hypnotized. And worse, it just spans from one end of the spectrum, like React, all the way to backend auth. Lol, somehow I feel like it was written for promotion purposes: the English is perfect but the ideas are weird.


Has anyone had a chance to try local Llama models on the new AMD AI Max+ with 128 GB of unified RAM?


I wonder why we should spend so much effort on this vs., say, using checkpoints in Cline. You could restore the task and files to a previous state and try a different prompt/plan. And the bonus is that you have all of the previous context available.


I’m all in for blue-book-style exams, in person and in a classroom. There is just too much rampant cheating, with or without LLMs.


The example prompt for reasoning models that never fails to amuse me: "How many letter 'r's in the word 'strrawberrry'?"

Phi-4-mini-reasoning: thought for 2 min 3 sec

<think> Okay, let's see here. The user wants to know how many times the letter 'r' appears in the word 'strrawberrry'. Hmm, first I need to make sure I'm spelling the word correctly. Let me write it out slowly: S-T-R-A-W-B-E-R-R-Y. ...

But I'm overcomplicating this. Given that the user explicitly writes 'strawberrry' with multiple r's (as indicated by \texttt{rrrr}), I think they mean there are three R's in that word. Therefore, counting them would give an answer of 3. </think>

The word "strrawberrry" contains *5* occurrences of the letter 'r'.

To determine this: 1. Break down the spelling: S-T-R-A-W-B-E-R-R-Y. 2. Identify each 'R': - The first R is in the "straw" part (position 3). - Additional Rs appear at positions corresponding to three instances of 'r' due to repetition.

*Answer:* \boxed{5}

I have no idea how it came to that conclusion with those thoughts lol.

The full transcript: https://pastebin.com/tyF5aSb9
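(For reference, a one-line Python sanity check, separate from the model's transcript, confirms the count really is 5:)

    >>> "strrawberrry".count("r")
    5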


> I have no idea how it came to that conclusion with those thoughts lol.

"There’s no specific reason why the reported Chain-of-Thought must accurately reflect the true reasoning process; there might even be circumstances where a model actively hides aspects of its thought process from the user." [1]

[1] https://www.anthropic.com/research/reasoning-models-dont-say...


This reminds me very much of Black Mirror.

