For decades now, we've had to deal with articles like this one. People who know just enough to sound credible mislead those who know even less into mutilating their systems in the name of "optimization". This genre is a menace.
Much harm has arisen out of the superstitious fear of 100% CPU use. Why wouldn't you want a compute bound task to use all available compute? It'll finish faster that way. We keep the system responsive with priorities and interactivity-aware thresholds, not by making a scary-looking but innocuous number go down in an ultimately counterproductive way.
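To make that concrete, here's a minimal sketch (mine, not the OS's actual policy) of what "use all the idle compute without hurting interactivity" looks like from the program's side: the compute-bound work just asks the scheduler to deprioritize it.

    import os

    # Rough sketch, Unix-only: a compute-bound worker lowers its own scheduling
    # priority so it soaks up idle CPU time but yields to interactive work.
    # busy_work is just a placeholder for the real computation.
    def busy_work():
        return sum(i * i for i in range(10_000_000))

    if __name__ == "__main__":
        os.nice(19)   # raise niceness as far as it will go: lowest priority
        print(busy_work())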
The article's naive treatment of memory is also telling. The "Memory" column in the task manager is RSS, which counts shared memory multiple times, once for each process. You literally can't say the 5MB "adds up": summing RSS across processes does not produce a physically meaningful total. It is absolute nonsense, and when you make optimization decisions based on garbage input, you produce garbage output.
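If you want to see the double-counting for yourself, here's a rough sketch (assuming the psutil package; PSS is only reported on Linux, so elsewhere this is illustrative at best):

    import psutil

    # Summing RSS counts every shared page once per process that maps it,
    # so the total can exceed physical memory. PSS divides shared pages
    # among their mappers, which makes it roughly additive.
    rss_total = pss_total = 0
    for p in psutil.process_iter():
        try:
            m = p.memory_full_info()              # may need elevated privileges
            rss_total += m.rss
            pss_total += getattr(m, "pss", 0)     # pss exists only on Linux
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass

    print(f"sum of RSS: {rss_total / 2**30:.1f} GiB")
    print(f"sum of PSS: {pss_total / 2**30:.1f} GiB")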
It's hard to blame Apple for locking down the OS core like this. People try to "optimize" Windows all the time by disabling load-bearing services that cost almost nothing just so "number go down" and they get that fuzzy feeling they've optimized their computer. Then the rest of the world has to deal with bug reports in which some API mysteriously doesn't work because the user broke his own system but blames you anyway.
> Much harm has arisen out of the superstitious fear of 100% CPU use. Why wouldn't you want a compute bound task to use all available compute? It'll finish faster that way.
Because it hurts the speed/responsiveness of stuff you actually care about. It also has other negative side effects like fan noise and temperature, which, with the poor thermal insulation of a MacBook, can even get hot enough to burn you.
Pretty obvious stuff if you don't dismiss the issues as superstitions.
> It'll finish faster that way.
The usefulness of which might be none: some background maintenance process finishes in 5 seconds, which I don't notice, vs. in 1 second while turning the fans on or making my app slower.
> We keep the system responsive with priorities and interactivity-aware thresholds,
Only in your fantasy; in reality you fail at that, so the "superstitions" arise.
> It's hard to blame Apple for locking down the OS core like this.
Of course it is, if you ignore the real issues with bloat and only notice the mistakes, but that's a self-inflicted perspective.
> by disabling load-bearing services
The article mentions that there is not even basic information on what the services do, and it's similar in Windows, so maybe the proper way out is to teach people and also to debloat the OS proactively to give them less of an incentive to do it themselves?
The right way to make the system stick to thermal constraints is to modulate clock speed and cooling, not randomly throttle workloads so some task manager reports they're running inefficiently
Perhaps it did a while ago. Now, https://www.bazhenov.me/posts/activity-monitor-anatomy/ is a good read. It's much better than RSS, although I'm still not sure that I like the inclusion of private compressed memory. In any case, thanks for the correction.
One of the ways both macOS and iOS get good battery life is burst-y CPU loads to return the CPU to idle as quickly as possible. They also both run background tasks like Spotlight on the e-cores whenever possible. So some process maxing out an e-core is using a lot less power than one maxing out a p-core. Background processes maxing out a core occasionally is not as much of a problem as a lot of people seem to assume.
We don't have a lot of GPUs available right now, but it is not crazy hard to get it running on our MI300x. Depending on your quant, you probably want a 4x.
ssh admin.hotaisle.app
Yes, this should be made easier to just get a VM with it pre-installed. Working on that.
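In the meantime, a rough sketch of what a 4-way ("4x") tensor-parallel launch looks like with vLLM; the model id is a placeholder, not a tested recipe:

    from vllm import LLM, SamplingParams

    # Rough sketch only: shard the weights across 4 GPUs with tensor parallelism.
    llm = LLM(
        model="zai-org/GLM-4.7",       # placeholder model id
        tensor_parallel_size=4,        # split across 4 MI300X GPUs
    )

    outputs = llm.generate(["Hello"], SamplingParams(max_tokens=64))
    print(outputs[0].outputs[0].text)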
I find it hard to trust post training quantizations. Why don't they run benchmarks to see the degradation in performance? It sketches me out because it should be the easiest thing to automatically run a suite of benchmarks
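For what it's worth, running the comparison yourself isn't much work. A sketch using EleutherAI's lm-evaluation-harness (model ids and tasks here are just illustrative):

    import lm_eval

    # Rough sketch: score the full-precision and quantized checkpoints on the
    # same tasks and compare, to see how much the quant actually degrades.
    for model_id in ["some-org/model-bf16", "some-org/model-awq-4bit"]:  # placeholders
        results = lm_eval.simple_evaluate(
            model="hf",
            model_args=f"pretrained={model_id}",
            tasks=["gsm8k", "mmlu"],
        )
        print(model_id, results["results"])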
One thing to consider is that this version is a new architecture, so it’ll take time for Llama CPP to get updated. Similar to how it was with Qwen Next.
There are a bunch of 4-bit quants in the GGUF link and 0xSero has some smaller stuff too. Might still be too big and you'll need to un-GPU-poor yourself.
Except this is GLM 4.7 Flash, which has 32B total params, 3B active. It should fit with a decent context window of 40k or so in 20GB of RAM at 4-bit weight quantization, and you can save even more by quantizing the activations and KV cache to 8-bit.
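A back-of-the-envelope check of those numbers (the KV-cache shape below is a guess, since I don't know the exact architecture):

    # ~32B params at 4-bit -> roughly 16 GB of weights
    weights_gib = 32e9 * 0.5 / 2**30
    print(f"weights: {weights_gib:.1f} GiB")               # ~14.9 GiB

    # KV cache at 8-bit for 40k tokens, assuming 32 layers, 4 KV heads, head dim 128
    kv_gib = 40_000 * 32 * 2 * 4 * 128 * 1 / 2**30         # tokens*layers*(K+V)*heads*dim*1B
    print(f"KV cache: {kv_gib:.1f} GiB")                   # ~1.2 GiB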
Yes, but the parent link was to the big GLM 4.7 that had a bunch of GGUFs; the new one at the point of posting did not, nor does it now. I'm waiting for the unsloth guys for the 4.7 Flash.
An outside transfer switch and a 10-20kW portable generator are like $4-5k. It requires manual switching, but it works for us in our hurricane-prone region. It helped with last year's 1-in-100-year winter storm in our southern region.
Battery/solar doesn't make sense in my opinion. Too many years to break even, like the parent comment said, and by the time you break even at 10 years, your system is either too inefficient or needs replacing. At least with the portable generator, you can move it with you to a new home and use it for other things like camping or RVing.
Context: I'm in the Netherlands. With taxes, power is around 25 cents/kWh for me. For reference: Amsterdam is around a latitude of 52N, which is north enough that it only hits Alaska, not the US mainland.
I installed 2800Wp of solar for about €2800 ($3000, payback in 4-5 years), and a 5kWh battery for €1200 ($1300) all in. The battery has an expected payback time of just over 5 years, and I have some backup power if I need it.
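Rough arithmetic behind the panel payback, assuming a yield of about 0.9 kWh per Wp per year at this latitude (an assumption, not a measured figure):

    cost_eur = 2800
    yield_kwh = 2800 * 0.9           # 2800 Wp * ~0.9 kWh/Wp/year (assumed NL yield)
    savings_eur = yield_kwh * 0.25   # ~25 cents/kWh
    print(cost_eur / savings_eur)    # ~4.4 years payback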
I'm pretty sure about the battery payback, because I have a few years of per-second consumption data in ClickHouse and (very conservatively) simulated the battery. A few years ago any business case for storage was completely impossible, and now suddenly we're here.
I could totally see this happen for the US as prices improve further, even if it’s not feasible today.
Is it priceless? I literally wouldn't pay more than $200 to have electricity for a day while the whole neighborhood doesn't. Anything more and I'd prefer to just keep the money in my pocket to be honest.
In my country I've never had to deal with more than 15 minutes, twice in my life. In other countries it's sometimes been a day, but really I just go on with my life.
What's funny about that is you assume that's the case, but a lot of solar isn't installed to be backup power. With storage, yes, but straight-up solar, no.
99% of systems are grid tie, so unless you’re spending another $7k for an ATS and associated infrastructure or you’re 100% off grid, your power still goes off.
"An ATS (Automatic Transfer Switch) for solar is a crucial device that seamlessly switches your home's power between the utility grid, your solar panels/battery bank...
And I should clarify that you technically can get away with a less expensive interlock system, but you're still paying a few thousand dollars to have your panel replaced (unless you feel comfortable doing that sort of electrical work yourself).
Making a system non-grid-tie is comparatively expensive, that's why grid tie is so common. People think you add solar + batteries and you're ready for doomsday - not quite.