If Python is your baseline, LuaJIT is certainly going to be overkill. But to answer your question: when and where latency matters. Web apps, text editors, etc.
The greybeards would do it with imagemagick, vips, or even ffmpeg. Gives you full control over the quality and you can script it, parallelize it, and more.
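In that spirit, a hedged Python sketch of scripting and parallelizing ImageMagick: it shells out to `convert` (on ImageMagick 7 the binary is `magick` instead) and fans the work out over a thread pool. The resize geometry, quality setting, and directory layout are placeholders, not recommendations.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def convert_cmd(src: Path, dst: Path, max_width: int = 1600) -> list:
    """Build the ImageMagick command line: shrink to `max_width` px wide
    (the trailing '>' means 'only shrink, never enlarge') at quality 85."""
    return ["convert", str(src), "-resize", f"{max_width}x>",
            "-quality", "85", str(dst)]

def convert_all(srcs, out_dir: Path, workers: int = 8) -> None:
    """Run up to `workers` convert processes in parallel; each thread is
    mostly waiting on its subprocess, so threads are enough here."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for src in srcs:
            pool.submit(subprocess.run,
                        convert_cmd(src, out_dir / src.name),
                        check=True)
```

The same pattern works for vips or ffmpeg by swapping the command builder.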
Not exactly the topic of discussion, but also not not on topic: just wanted to sing the praises of chrony, which has performed better than the traditional OS-native NTP clients in our testing on a myriad of real and virtualized hardware.
Pedantically, a monotonic function need not have a constant first derivative. To take it further, in mathematics it is accepted for a monotonic function to have a countable number of discontinuities, but of course in the context of a digital clock that only increments in discrete steps, that’s of little bearing.
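A concrete example of both halves of that claim is the floor function: it is monotone non-decreasing, its derivative is not constant (it is zero between integers and undefined at them), and it jumps at every integer, a countably infinite set of discontinuities:

```latex
% Floor function: monotone non-decreasing, yet discontinuous at every integer.
f(x) = \lfloor x \rfloor, \qquad
x \le y \implies f(x) \le f(y), \qquad
\lim_{x \to n^-} f(x) = n - 1 \ne n = f(n) \quad \text{for } n \in \mathbb{Z}
```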
But that’s all beside the point, since most sane time-sync clients (regardless of protocol) generally handle small deviations (i.e. normal cases) by speeding up or slowing down the system clock, not jumping it (forward or backward).
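To make slewing concrete, here is a toy Python model (not any real client's algorithm): the clock starts some offset off of true time and removes at most a capped number of parts per million of error per elapsed second, so its readings never jump. The 500 ppm cap is roughly classic ntpd's slew limit; chrony can slew much faster, and the numbers here are purely illustrative.

```python
def slewed_clock(true_times, offset_s, rate_ppm=500.0):
    """Return the readings of a clock that starts `offset_s` seconds off
    and slews the error out at up to `rate_ppm` parts per million per
    elapsed second of true time, instead of stepping. Because the
    correction per tick is tiny compared to the tick itself, the
    readings stay strictly monotonic."""
    readings, err, prev = [], offset_s, None
    for t in true_times:
        if prev is not None:
            max_adj = (t - prev) * rate_ppm * 1e-6
            # shrink the error toward zero, never overshooting
            if err > 0:
                err = max(0.0, err - max_adj)
            else:
                err = min(0.0, err + max_adj)
        readings.append(t + err)
        prev = t
    return readings
```

A stepped clock would instead subtract the whole offset at once, which is exactly the backward jump that slewing exists to avoid.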
You're right. Let me correct myself: a hobbyist-friendly hardware solution. Dolphin's PCIe switches cost more than eight RTX 3090s on a Threadripper machine.
They’re not the same; there are (at least) two different tunes per 5.x version.
For each, you can use it as “instant”, supposedly without thinking (though these are all exclusively reasoning models), or specify a reasoning amount (low, medium, high, and now xhigh - though if you don’t specify, it defaults to none). Or you can use the -chat version, which is also “no thinking” but in practice performs markedly differently from the regular version with thinking off: not more or less intelligent, but with a different style and answering method.
Or more likely Google couldn't give a rat's arse whether those AI summaries are good or not (except to the degree that people don't flee it); what it cares about is that they keep users with Google itself, instead of clicking off to other sources.
After all, it's the same search engine team that didn't care about its search results - its main draw - actively going to shit for over a decade.
Those summaries would be far more expensive to generate than the searches themselves, so they're probably caching the top 100k most common queries or something, maybe even pre-caching them.
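A minimal sketch of the kind of cache that speculation implies, keyed by a normalized query, with LRU eviction and a pre-warm step for the most common queries. To be clear, the normalization, capacity, and pre-warming here are my guesses at what such a system might look like, not a description of anything Google actually runs.

```python
from collections import OrderedDict

class SummaryCache:
    """Toy LRU cache for generated summaries, keyed by normalized query."""

    def __init__(self, capacity: int = 100_000):
        self.capacity = capacity
        self._store = OrderedDict()

    @staticmethod
    def normalize(query: str) -> str:
        # Collapse case and whitespace so near-identical queries share an entry.
        return " ".join(query.lower().split())

    def get(self, query: str):
        key = self.normalize(query)
        if key not in self._store:
            return None
        self._store.move_to_end(key)          # mark as recently used
        return self._store[key]

    def put(self, query: str, summary: str) -> None:
        key = self.normalize(query)
        self._store[key] = summary
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)   # evict least recently used

    def prewarm(self, top_queries) -> None:
        """Pre-cache (query, summary) pairs for the most common queries."""
        for query, summary in top_queries:
            self.put(query, summary)
```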