(I don’t see what this being reported during the Christmas holidays has to do with not revealing the disclosure and patch timeline; a “note that delays should be attributed to Christmas” would have sufficed.)
Do browsers use a custom dictionary for zstd (I don’t think so since I can precompress zstd content server-side)?
Brotli was designed for html compression, so while it’s a relatively inferior algorithm, its stock dictionary is trained/optimized on html/css/js. Chrome/Blink recently added support for using content compressed with a bespoke dictionary, but that only works for massive sites that have a heavily skewed new/returning visit ratio (because of the cost of shipping both the compressed content and the dictionary).
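For reference, the negotiation looks roughly like this (headers per the Compression Dictionary Transport draft, from memory, so treat the details as approximate):

    # server marks a resource as a future dictionary for matching URLs
    HTTP/1.1 200 OK
    Content-Encoding: br
    Use-As-Dictionary: match="/app/*.js"

    # a later request advertises the cached dictionary by hash
    GET /app/main.v2.js HTTP/1.1
    Accept-Encoding: dcb, dcz, br, zstd
    Available-Dictionary: :<base64 SHA-256 of the dictionary>:

    # response is delta-compressed against that dictionary
    HTTP/1.1 200 OK
    Content-Encoding: dcb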
Long story short, I could see br being better than zstd for basic web purposes.
Minor bug/suggestion: right-aligned text inputs (e.g. the username input on the “me” page) aren’t ideal, since they are often obscured by input helpers (autocomplete or form-fill helper icons).
Unfortunately even the videos that do contain helpful imagery are still dominated by huge sections of low entropy.
For example, one of the most useful applications of video over text is appliance or automotive repair. Even there, the ideal format would be an article interspersed with short video sections, not a video where a talking head and some ~static shaky cam take up most of the runtime as the individual drones on about mostly unrelated topics or unimportant details, yet you can’t skip past it in case something actually pertinent is covered in that stretch.
Ay, there's the rub. Professional video makers tend to be pushed into making videos for a more general audience, and niche topics are left to first-timers who haven't developed video-making skills and (tend to) go on and on.
I've produced a few videos, and I was shocked at how difficult it was to be clear. I have the same problem with writing, but at least writing is constrained in a way video making isn't. There are so many ways to make a video about something, and most of them are wrong!
I’m still rocking a plasma tv which sidesteps the matter altogether :)
Best tv tech to date, though OLED improvements in the past year mean we might see good panels hitting the market in a few years. The race to produce the brightest panels (and to put them on display for comparison and testing in brightly lit electronics stores, environments that couldn’t be further from the actual viewing experience) resulted in a bunch of mass-market crap.
Took down my Pioneer Kuro a couple of weeks ago. OLED is so good now.
Agree about the in-store crap and all the processing that’s turned on for the TVs on display. But brightness is useful - it can help combat ambient light, and HDR can look amazing.
The biggest selling point of Postgres over qdrant or whatever is that you can put all the data in the same db: you get joins, CTEs, foreign keys and other constraints, lower latency, you can eliminate what are effectively N+1 query patterns, and you can ensure data integrity.
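To make that concrete, here's a minimal sketch assuming a pgvector-style setup with hypothetical tables (users, documents, embeddings): one query does the similarity search and pulls in the relational metadata, where a separate vector store would force a search followed by per-result lookups (the N+1 shape).

    -- hypothetical schema: users(id, name), documents(id, title, author_id),
    -- embeddings(doc_id, vec vector(768)) with FKs between them
    SELECT d.id, d.title, u.name AS author
    FROM embeddings e
    JOIN documents d ON d.id = e.doc_id
    JOIN users u ON u.id = d.author_id
    ORDER BY e.vec <=> $1  -- pgvector cosine-distance operator
    LIMIT 10;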
I generally agree that one database instance is ideal, but there are other reasons why Postgres everywhere is advantageous, even across multiple instances:
- Expertise: it's just SQL for the most part
- Ecosystem: same ORM, same connection pooler
- Portability: all major clouds have managed Postgres
I'd gladly take multiple Postgres instances even if I lose cross-database joins.
Yep. If performance becomes a concern but we still want to exploit joins etc., it's easy to set up replicas and "shard" read-only use cases across them.
Non-ruby dev here. Can someone explain the side exit thing for me?
> This meant that the code we were running had to continue to have the same preconditions (expected types, no method redefinitions, etc) or the JIT would safely abort. Now, we can side-exit and use this feature liberally.
> For example, we gracefully handle the phase transition from integer to string; a guard instruction fails and transfers control to the interpreter.
> (example showing add of two strings omitted)
What is the difference between the JIT safely aborting and the JIT returning control to the interpreter? Or does "the JIT aborts" mean the entire app aborts? (I presumed JIT aborting meant continuing on in the interpreter anyway.)
(Also, why would you want the code that uses the incorrect types to succeed? Isn’t abort of the whole unit of execution the right answer here, anyway?)
Dynamic languages allow a range of types to flow through functions. JITs trace execution and attempt to specialize functions based on the types observed at runtime. It is possible that later on the function is called with different types than what the JIT observed and compiled code for. To handle this, JITs use stubs and guards: a guard checks the actual type at runtime before running the JITted code. If the type does not match, you either call a stub to generate the correct machine code, or you fall back into the interpreter slow path.
An example might be the plus operator. Many languages allow integers, floats, strings and more on either side of the operator. The JIT will likely see mostly integers and optimize the call for integer math. If you later call the plus operator with two Point instances, you fall back to the interpreter.
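A conceptual sketch of the guard + side exit, written as plain Ruby with made-up names (a real JIT emits this as machine code):

    # stand-in for "resume execution in the interpreter's generic dispatch"
    def side_exit(a, b)
      a + b
    end

    # what the JIT conceptually compiles for `a + b` after observing Integers
    def jitted_add(a, b)
      if a.is_a?(Integer) && b.is_a?(Integer)  # guard: types still match what we compiled for?
        a + b                                  # fast path: integer add, no dynamic dispatch
      else
        side_exit(a, b)                        # guard failed: fall back to the interpreter
      end
    end

    jitted_add(1, 2)      # stays on the fast path => 3
    jitted_add("a", "b")  # guard fails, side-exits => "ab"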
In this case, we used to abort (i.e. abort(); intentionally crash the entire process) but now we jump into the interpreter to handle the dynamic behavior.
If someone writes dynamic ruby code to add two objects, it should succeed in both integer and string cases. The JIT just wants to optimize whatever the common case is.
I’m assuming that when you talk about crashing processes as the status quo you’re referring to earlier versions of zjit rather than current Ruby on yjit? Because I’ve never seen a Ruby process crash because + was called with different arguments.
I guess I’m confused why an actual add instruction is emitted rather than whatever overloaded operation takes place when the + symbol (or overloaded add vtable entry) is called (like it would in other OOP languages).
If all you're doing is summing small integers (frequently the case), it's much preferable to optimize that to be fast and skip the very dynamic method lookup (the slower, less common case).
The obvious solution is an ssh-agent integration that caches the touch-derived key for up to N hours or until the workstation is locked (as a proxy for a user-is-away event), AND integrates with secure desktop (à la UAC) to securely show a software-only confirmation prompt/dialog for subsequent pushes within the timeout window.
(Tbh, a secure-desktop-integrated confirmation dialog would solve most issues that needed a hardware key to begin with.)
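For what it's worth, OpenSSH's agent already covers part of this: keys can be cached with a lifetime and flagged to require per-use confirmation (via ssh-askpass), though there's no lock-screen or secure-desktop integration:

    # cache the key for 4 hours and prompt for confirmation on every use
    ssh-add -t 4h -c ~/.ssh/id_ed25519

    # drop all cached keys (e.g. wired to a lock-screen hook)
    ssh-add -D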
GitHub dropped http authentication so this only works for public repos (not that the UX or security of http auth for git is nice).
Can git be configured to use different keys for push and pull? (You can obviously use different upstreams, but that’s not as elegant.) Most git servers let you specify read vs read-write privileges per key (aka “deploy keys”), so you could use one key that doesn’t need touch to pull updates and another key (which does) to push.
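One way to get close, sketched with made-up alias and key names: git supports a separate push URL per remote, and ssh host aliases let each URL present a different key.

    # ~/.ssh/config: two aliases for the same host, each with its own key
    Host github-pull
        HostName github.com
        User git
        IdentityFile ~/.ssh/id_ed25519_ro  # no-touch key, read-only
        IdentitiesOnly yes

    Host github-push
        HostName github.com
        User git
        IdentityFile ~/.ssh/id_ed25519_sk  # touch-required key
        IdentitiesOnly yes

    # give the remote different fetch and push URLs
    git remote set-url origin git@github-pull:org/repo.git
    git remote set-url --push origin git@github-push:org/repo.git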
GitHub did not drop http auth. They prefer you use http instead of ssh.
What they dropped was auth using your account name and password. You need to use a token as your password, or use an extra tool like their cli client to set up auth (but it sucks if you have multiple accounts).