ComputerGuru's comments

Devil’s advocate: How do they map that data to a user when you are buying through a maze of resellers?

They don't; they try it against all the keys. There are at most a few billion of them.

see Dual_EC_DRBG
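
To make the scale concrete, here's a toy sketch. The 24-bit keyspace and XOR "cipher" are stand-ins I made up so it runs quickly; a real attack would sweep the actual weakened keyspace:

  def decrypt(key, ct):
      # Toy cipher: XOR each byte with a rotating byte of the 24-bit key.
      return bytes(b ^ ((key >> (8 * (i % 3))) & 0xFF) for i, b in enumerate(ct))

  def brute_force(ct, known_prefix):
      # Exhaust the entire (deliberately small) keyspace.
      for key in range(2 ** 24):
          if decrypt(key, ct).startswith(known_prefix):
              return key
      return None

  secret_key = 0xC0FFEE
  ct = decrypt(secret_key, b"GET / HTTP/1.1")   # XOR is its own inverse
  print(hex(brute_force(ct, b"GET ")))          # recovers 0xc0ffee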


If Python is your baseline, LuaJIT is certainly going to be overkill. But to answer your question: when and where latency matters. Web apps, text editors, etc.


The greybeards would do it with ImageMagick, vips, or even ffmpeg. That gives you full control over the quality, and you can script it, parallelize it, and more.
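
A minimal sketch of that approach in Python, shelling out to ImageMagick. This assumes the ImageMagick 7 `magick` binary is on PATH; the paths, size, and quality are just placeholders:

  import subprocess
  from concurrent.futures import ProcessPoolExecutor
  from pathlib import Path

  def convert(src: Path) -> None:
      # "-resize 1600x1600>" only shrinks images larger than the box.
      dst = src.with_suffix(".webp")
      subprocess.run(
          ["magick", str(src), "-resize", "1600x1600>", "-quality", "85", str(dst)],
          check=True,
      )

  if __name__ == "__main__":
      images = sorted(Path("photos").glob("*.jpg"))
      with ProcessPoolExecutor() as pool:   # one worker per CPU core
          list(pool.map(convert, images))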


Not exactly the topic of discussion, but also not not on topic: just wanted to sing the praises of chrony, which has performed better than the traditional OS-native NTP clients in our testing on a myriad of real and virtualized hardware.


Chrony is the default already in some distros (RHEL and SLES that I know of), probably for this very reason.


Pedantically, a monotonic function need not have a constant first derivative. To take it further, in mathematics it is accepted for a monotonic function to have a countable number of discontinuities, though in the context of a digital clock that only increments in discrete steps, that has little bearing.
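
For reference (my notation, nothing from upthread), monotone non-decreasing just means

  \forall\, x \le y:\quad f(x) \le f(y)

so where f happens to be differentiable, all that's required is f'(x) \ge 0, not a constant f'.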

But that’s all beside the point, since most sane time-sync clients (regardless of protocol) generally handle small deviations (i.e. the normal case) by speeding up or slowing down the system clock, not jumping it (forward or backward).
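
A toy simulation of the difference (the rates and loop are illustrative, not how chrony or ntpd actually implement it):

  def step(clock, target):
      # Step: jump straight to the target; goes backward if target < clock.
      return target

  def slew(clock, target, max_rate=0.0005, interval=1.0):
      # Slew: tick slightly fast or slow (here at most 500 us of correction
      # per second) until the offset is gone; the clock never moves backward.
      while abs(target - clock) > 1e-9:
          offset = target - clock
          correction = max(-max_rate, min(max_rate, offset)) * interval
          clock += interval + correction   # local clock runs a bit fast/slow
          target += interval               # while true time marches on
      return clock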


SHA-1 as a MAC (i.e. HMAC-SHA1) alongside AES encryption is a different use from SHA-1 as a collision-resistant hash, and it remains secure, though there are of course better alternatives.
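
In Python terms (the key and message are placeholders; the point is that HMAC-SHA1's security rests on SHA-1 behaving like a PRF, not on collision resistance, which is what the known attacks break):

  import hmac, hashlib

  key = b"0123456789abcdef"          # in practice, a random secret key
  ciphertext = b"...AES output..."   # MAC the ciphertext (encrypt-then-MAC)

  tag = hmac.new(key, ciphertext, hashlib.sha1).digest()

  # Verify with a constant-time comparison to avoid timing leaks.
  ok = hmac.compare_digest(tag, hmac.new(key, ciphertext, hashlib.sha1).digest())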


That’s what happened in TFA.


You're right. Let me correct myself: a hobbyist-friendly hardware solution. Dolphin's PCIe switches cost more than eight RTX 3090s on a Threadripper machine.


Jeff forgot to mention that in his post!


Use it over the API.


They’re not the same; there are (at least) two different tunes per 5.x.

For each, you can use it as “instant”, supposedly without thinking (though these are all exclusively reasoning models), or specify a reasoning amount (low, medium, high, and now xhigh; if you don’t specify one, it defaults to none). Or you can use the -chat version, which is also “no thinking” but in practice performs markedly differently from the regular version with thinking off: not more or less intelligent, but a different style and answering method.
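
As a sketch of what “use it over the API” with an explicit reasoning level looks like (the model name is a placeholder, and reasoning_effort is the OpenAI Python SDK parameter as I understand it; check the current docs before relying on this):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  resp = client.chat.completions.create(
      model="gpt-5.1",               # placeholder model name
      reasoning_effort="high",       # e.g. "low" / "medium" / "high"
      messages=[{"role": "user", "content": "Summarize NTP clock slewing."}],
  )
  print(resp.choices[0].message.content)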


Or maybe Google knows most people search inane, obvious things?


Or, more likely, Google couldn't give a rat's arse whether those AI summaries are good or not (except to the degree that people don't flee), and what it cares about is that they keep users within Google itself, instead of clicking off to other sources.

After all, it's the same search engine team that didn't care about its search results - its main draw - actively going to shit for over a decade.


Google's AI Overviews are often wrong about obvious things, so... lol

They probably use an old Flash Lite model, something super small, and just summarize the search...


Those summaries would be far more expensive to generate than the searches themselves, so they're probably caching the top 100k most common queries or something, maybe even pre-caching them.
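
Something like this would be trivial to bolt on (generate_summary is a hypothetical stand-in for the actual model call; the cache size and normalization are guesses):

  from functools import lru_cache

  def generate_summary(query: str) -> str:
      return f"summary for {query!r}"   # placeholder for an expensive LLM call

  @lru_cache(maxsize=100_000)
  def cached_summary(query: str) -> str:
      # Normalizing the query raises the hit rate for common searches.
      return generate_summary(query.strip().lower())

  # Pre-cache the head of the query distribution at deploy time:
  for q in ["weather today", "time in london"]:   # stand-ins for real top queries
      cached_summary(q)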

