Yeah, that's just flat-out not correct. If you're writing through a file system or the buffer cache and you don't fsync, there is no guarantee your data will still be there after, say, a power loss or a system panic. There's no guarantee it has even been passed to the device at all when an asynchronous write returns.
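To illustrate (the file name and record are made up, this is just a sketch of the buffered path): the write() returning tells you nothing about durability, only the fsync() does.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Hypothetical file, for illustration only. */
        int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char rec[] = "commit-record\n";
        /* write() returning only means the data is in the page cache;
           after a power loss or panic it may simply be gone. */
        if (write(fd, rec, strlen(rec)) != (ssize_t)strlen(rec)) {
            perror("write"); return 1;
        }

        /* fsync() is the durability point: it writes back the dirty pages
           and, on most setups, issues a device cache flush as well. */
        if (fsync(fd) != 0) { perror("fsync"); return 1; }

        close(fd);
        return 0;
    }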
However, in our experiments (including Figure 9), we bypass the page cache and issue writes using O_DIRECT to the block device. In this configuration, write completion reflects device-level persistence. For consumer SSDs without PLP, completions do not imply durability and a flush is still required.
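Roughly what we mean, as a sketch rather than our actual test harness (the device path and the 4 KiB alignment are assumptions, and this writes to the raw device, so don't point it at a disk you care about): with O_DIRECT the page cache is out of the picture, so completion reflects what the device reported, but on a drive without PLP the data can still sit in the volatile write cache until a flush.

    #define _GNU_SOURCE            /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Hypothetical block device; O_DIRECT bypasses the page cache. */
        int fd = open("/dev/nvme0n1", O_WRONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        /* O_DIRECT I/O must be aligned; 4096 bytes is a common requirement. */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096) != 0) return 1;
        memset(buf, 0xAB, 4096);

        if (pwrite(fd, buf, 4096, 0) != 4096) { perror("pwrite"); return 1; }

        /* Without PLP the data may still be in the drive's volatile cache;
           fdatasync() on the device issues the cache-flush command. */
        if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }

        free(buf);
        close(fd);
        return 0;
    }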
> "When an SSD has Power-Loss Protection (PLP) -- for example, a supercapacitor that backs the write-back cache -- then the device's internal write-back cache contents are guaranteed durable even if power is lost. Because of this guarantee, the storage controller does not need to flush the cache to media in a strict ordering or slow way just to make data persistent."
(Won et al., FAST 2018)
https://www.usenix.org/system/files/conference/fast18/fast18...
We will make this more explicit in the next revision. Thanks.
This is what I do! The API itself is not particularly amazing (the way it handles batch requests as a MIME multipart formatted POST body, where each part is itself an HTTP request, is particularly obscene).
The underlying data model is kind of OK though: messages are immutable, they get a long-lived unique ID that survives changes to labels, etc. There is a history of changes you can grab incrementally. You can download the full message body, which you can then parse as mail; I save each one into a separate file and then index them in SQLite.
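A rough sketch of just that indexing step (the table layout and names are made up for this comment; the actual download and MIME parsing happen elsewhere):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Record one downloaded message: its immutable Gmail ID, the file the
       raw body was saved to, and the last history ID seen for it. */
    static int index_message(sqlite3 *db, const char *msg_id,
                             const char *path, sqlite3_int64 history_id) {
        const char *sql = "INSERT OR IGNORE INTO messages (id, path, history_id)"
                          " VALUES (?, ?, ?);";
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return -1;
        sqlite3_bind_text(stmt, 1, msg_id, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, path, -1, SQLITE_STATIC);
        sqlite3_bind_int64(stmt, 3, history_id);
        int rc = sqlite3_step(stmt);               /* SQLITE_DONE on success */
        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? 0 : -1;
    }

    int main(void) {
        sqlite3 *db;
        if (sqlite3_open("mail-index.db", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS messages ("
            " id TEXT PRIMARY KEY,"                /* immutable message ID */
            " path TEXT NOT NULL,"                 /* file with the raw body */
            " history_id INTEGER)",                /* incremental sync cursor */
            NULL, NULL, NULL);

        /* Example values; in practice these come from the API response. */
        index_message(db, "18c2f0a1b2c3d4e5", "mail/18c2f0a1b2c3d4e5.eml", 123456);
        sqlite3_close(db);
        return 0;
    }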
I cannot fathom the bleak, pessimistic perspective of wanting fixed trains and buses over this. Crying babies, rude people.
American transit sucks and it's not getting better. It's tolerable in cities like NYC, but even so it's a far cry from Asia. If you're not American, please don't project. We'll never have that here. We are not dense enough for it.
The main problems with cars are not the fuel they run on or the level of automation. They're the space they occupy per passenger and for how long they occupy it, the mass they have to move around, the materials required to build them, tire particulates, and the danger they pose to other traffic participants. None of these is alleviated by the thing-du-jour the car industry presents as a solution to all the problems on any given day. The real issues are all endemic to the concept of a car in the first place.
And, maybe, if more people stopped seeing random encounters with some of their co-humans as just an annoying moment of having to deal with icky other people, we'd make some progress on our loneliness, aggravation and political polarisation problems.
Actually, you don't need to do anything of the sort! Nobody is owed an easy ride to other people's stuff.
Plus, if the magic technology is indeed so incredible, why would we need to do anything differently? Surely it will just be able to consume whatever a human could use themselves without issues.
> Nobody is owed an easy ride to other people's stuff.
If your website doesn't have a relevant profit model or competition, then sure. If you run a SaaS business and your customer wants to do some of their own analytics or automation with a model, it's going to be hard to say no in the future. If you're selling tickets on a website and block robots, you'll lose money, etc.
If this is something people learn to use in Excel or Google Docs, they'll start expecting some way to do so with their company data in your SaaS products, or you'd better build a chat model with equivalent capabilities. Both would benefit from documentation.
It's not unreasonable to think that "is [software] easy or hard for an LLM agent to consume and manipulate" will become a competitive differentiator for SaaS products, especially enterprise ones.
Maybe, but it sure makes all the hyped claims around LLMs seem like lies. If they're smarter than a Ph.D. student, why can't they use software designed to be used by high school dropouts?
This is just a Linux ecosystem thing. Other full-size operating systems do memory accounting differently and are able to correctly communicate when more memory is not available.
Many C allocators have functions that are explicitly for non-trivial allocation scenarios, but what major operating system's malloc implementation returns NULL? MSVC’s docs reserve the right to return NULL, but the actual code is not capable of doing so (because it would be a security nightmare).
I hack on various C projects on a Linux/musl box, and I'm pretty sure I've seen musl's malloc() return 0, although possibly the only cases where I've triggered that fall into the 'unreasonably huge' category, where a typo made my enormous request fail some sanity check before even trying to allocate.
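For what it's worth, that 'unreasonably huge' case is trivial to reproduce (64-bit assumed; the merely-large case is where the overcommit behaviour discussed below takes over instead):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* A request this size can never be satisfied: it trips the
           allocator's sanity checks (and the kernel could not map it
           anyway), so malloc returns NULL without overcommit ever
           entering into it. */
        void *p = malloc(SIZE_MAX / 2);
        if (p == NULL)
            puts("malloc returned NULL");
        else
            free(p);
        return 0;
    }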
> Many C allocators have functions that are explicitly for non-trivial allocation scenarios, but what major operating system's malloc implementation returns NULL?
Solaris (and FreeBSD?) has overcommit disabled by default.
Solaris, AIX, *BSD and others do not offer overcommit, which is a Linux construct, and they all require enough swap space to be available. Installation manuals provide explicit guidelines on the swap partition sizing, with the rule of thumb being «at least double the RAM size», but almost always more in practice.
That is the conservative design used by several traditional UNIX systems for anonymous memory and MAP_PRIVATE mappings: the kernel accounts for, and may reserve, enough swap to back the potential private pages up front. Tools and docs in the Solaris and BSD family talk explicitly in those terms. An easy way to test it out in a BSD would be disabling the swap partition and trying to launch a large process – it will get killed at startup, and it is not possible to modify this behaviour.
Linux’s default policy is the opposite end of that spectrum: optimistic memory allocation, where allocations and private mappings can succeed without guaranteeing backing store (i.e. swap), with failure deferred to fault time and handled by the OOM killer – that is what Linux calls overcommit.
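As a sketch of the contrast (sizes and behaviour depend on the machine; on Linux the knob is vm.overcommit_memory, where 0 is the default heuristic, 1 is always-overcommit, and 2 is strict accounting in the spirit of the systems described above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Under Linux's optimistic accounting, untouched allocations keep
       "succeeding" well past RAM + swap, because nothing is reserved until
       the pages are faulted in; the failure shows up later, at fault time,
       as the OOM killer. Under strict accounting (vm.overcommit_memory=2,
       or the traditional UNIX behaviour above) malloc starts returning
       NULL up front instead. */
    int main(void) {
        enum { CHUNK = 1u << 30 };            /* 1 GiB per allocation */
        char *chunks[1024];
        int n = 0;

        while (n < 1024 && (chunks[n] = malloc(CHUNK)) != NULL)
            n++;                              /* not a single page touched */
        printf("got %d GiB of 'successful' allocations\n", n);

        /* Uncomment to actually fault the pages in; on an overcommitted
           system this is where the OOM killer appears.
        for (int i = 0; i < n; i++)
            memset(chunks[i], 1, CHUNK);
        */
        for (int i = 0; i < n; i++)
            free(chunks[i]);
        return 0;
    }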
As opposed to all the regular kinds of shitty behaviour landlords inflict on their tenants already? I feel like "because the money people will continue to misbehave" is absolutely not a reason to avoid doing something.
UBI is unique. When I get a raise at work, my landlord doesn’t know. If you implement UBI, every landlord knows that every tenant in the whole country has $xxxx more per month to pay.
Literally 100% of them will raise the rent and there won’t be anything anyone can do about it.
I don't think it is this simple. Even with UBI, there will be varying quality of rentals, with nicer ones being more expensive. If every landlord jacked up the price, demand would shift to cheaper, lower-quality rentals. More people would get roommates, etc., reducing overall demand.
In addition, every seller of a good/service could do the same. They can't all increase prices to extract the full $xxxx a month. There are much more complex dynamics at play than just "landlords will raise rent enough to extract the full UBI benefit."
This is only true in places where there are more people trying to rent than places available to rent.
In theory, having more capital available when a landlord raises rent an obnoxious amount will incentivize people who aren't making much to move somewhere with a lower CoL, a move they might not have been able to make work otherwise because of uncertainty about how long they'd be out of work and how much money they'd have to live on for that time.
This is only a problem when you have very limited housing supply, so you need to combine it with things like better housing/zoning policies and rent control.
FWIW, some of us still do this in C programs today. Having a relatively unique prefix for struct members makes it extremely easy to find uses of those members with relatively simple tools like cscope.
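A toy example of the convention (names invented): every member carries the struct's prefix, so a plain grep, or a cscope symbol lookup on tc_rcv_window, turns up only uses of that member rather than every other "window" in the tree.

    #include <stdint.h>

    struct tcp_conn {
        int      tc_state;       /* connection state machine */
        uint32_t tc_rcv_window;  /* receive window advertised to the peer */
        uint32_t tc_snd_next;    /* next sequence number to send */
    };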
I think basically everyone with ADHD discovers this eventually; e.g.,
> Sympathetic Procrastination Rotor: a technique for Time and Task Management.
> To aid in the fight against procrastination, arrange all of your tasks in a cycle, such that the natural opportunity for procrastination is always another task on the roadmap. In this essay I will