In theory, I'd agree. That said, I do suggest being open-minded here.
Sharing this because I got genuine value out of it, not just a "cool hype demo".
The key here is:
1. The prompt is created using the context from a long conversation with the CLI.
2. You use the generated prompt as a "v1" and MANUALLY edit it.
3. I'm constantly iterating on my prompt writer.
Seeing and navigating all the configs helped me build intuition around what my MacBook can or cannot do, how things are configured, how they work, and so on.
I also like that it ships with some CLI tools, including an OpenAI-compatible server. It's great to be able to take a model that's loaded and open up an endpoint to it for running local scripts.
You can get a quick feel for how it works via the chat interface and then extend it programmatically.
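As a rough sketch of that "extend it programmatically" step: once a local OpenAI-compatible server is running, a plain Python script can hit it with the standard chat-completions request shape. The base URL and model name below are assumptions for illustration (check what your server actually reports); only the request/response schema is the OpenAI-compatible part.

```python
# Sketch: calling a local OpenAI-compatible endpoint from a script.
# BASE_URL and the model name are placeholders -- adjust to your setup.
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # assumed; use your server's address


def build_payload(prompt, model="local-model", temperature=0.7):
    """Build a chat-completions request body (OpenAI-compatible schema)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def chat(prompt, **kwargs):
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_payload(prompt, **kwargs)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response: first choice's message content.
    return body["choices"][0]["message"]["content"]


# Usage (with a server running):
#   reply = chat("Summarize this log file in one sentence.")
```

Since the endpoint speaks the OpenAI schema, the official `openai` client library also works by pointing its `base_url` at the local server; the stdlib-only version above just avoids the extra dependency.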
We sped up fMRI analysis using distributed computing (MapReduce) and GPUs back in 2014.
Funny how nothing has changed.