The mainstream media reported on Iryna’s murder as soon as there was video, but it has been a constant subject of reporting in Charlotte since it happened, with immediate political ramifications.
I don’t read Twitter, but I do read my local news. I’m not quite sure that anyone is better off now that her murder is being nationally reported, to be honest.
It took CNN three days after the video was already circulating and getting huge traction before they reported on it themselves.
Four days for the NYT.
This was after the video had already been circulating for days and receiving a lot of attention; the killing happened on August 22nd, and the video had been going around since September 5th: https://www.mediaite.com/media/conservatives-call-out-media-...
These same outlets reported on George Floyd’s death effectively immediately.
Mark Duggan was shot in London and the US MSM picked it up faster.
I’m not aware of anything regarding local news, but when one killing reaches international news and the other has to already be organically international news via social media before any reporting happens, people start to make presumptions.
Maybe the FS was upstreamed too soon, causing your development velocity and the high expectations you have for your end users to be at odds with the well-entrenched workflows of Linux maintainers.
In any case, as a Linux user, I want to thank you for your work and for your code, which taught me a lot.
I hope it didn’t take too much of a toll on you.
Let’s hope that with the recent stabilization, the maintenance will be easier.
Well, being upstream has not been a rosy experience, so that'd be an easy judgement to make in hindsight.
But consider: ext4 was done entirely in tree, and btrfs was upstreamed much, much earlier and took a lot longer to stabilize.
So compared to historical precedent, we already upstreamed much later; e.g. we upstreamed long, long after we stopped making breaking changes to the on-disk format (that was the point where btrfs removed the experimental label!).
If we're saying even that is too soon, then we're saying that bcachefs shouldn't have been upstreamed until it was perfect. But, no one was ever going to fund developing a filesystem from scratch all the way to completion and perfection completely out of tree. That's far too much uncertainty, and that kind of money simply isn't being thrown around in the Linux filesystem world.
Asking a filesystem to only be merged when it's completely done and perfect is saying "we want all the benefit and none of the pain", and it's just fundamentally unrealistic.
The whole point of Linux is community based development, and that's how I've been developing bcachefs. I don't have a big engineering team - but what I do have is a large community of people doing very good QA work that I work with closely, on a daily basis. People show up from anywhere with bugs, I ask them to join the IRC channel, and we start working together and it goes from there; a lot of people see us doing productive work and stick around and find ways to help out.
If that no longer works within the development model of the Linux kernel... oy vey.
> The whole point of Linux is community based development
You contradict yourself too much. You ignore feedback about not working well with others, and whenever someone wants to contribute, you shut them down by claiming you're the expert. This makes it seem like you're more focused on attracting investment than on actual collaboration.
Maybe then you should have treated this filesystem as truly experimental, advertised it as such, and expected your end users to make frequent backups. You could also have some kind of DKMS bleeding-edge module for your users to test fixes before they reach the kernel.
That way you wouldn’t be so preoccupied with getting code into the kernel as fast as possible.
No, a lot of bcachefs users are explicitly using it because they’ve suffered data loss and needed something more reliable; that’s bcachefs’s main reason for existing.
Besides that, if you want to make anything truly reliable, you have to prioritize reliability at every step in the process of building it. That means actively supporting it, getting feedback and making sure it's working well, and you can't do that without shipping bugfixes.
Having to ship a DKMS module just so users can get the latest bugfixes would be nutso - that's just not how things are done.
Some distributions do release hotfixes before they reach the mainline kernel.
And if you expect end users to compile the head of Linus’s tree, why couldn’t they compile your branch instead? They’d get timely fixes.
Calling other people’s work “garbage” is not conclusive proof of arrogance if that work is actual garbage. Linus has calmed down a lot over the years, so at this point, if he still has issues with somebody’s work, it’s probably best to listen rather than call it arrogance.
Personally, I do believe the quality of the Linux kernel has a lot to do with having a steward who is able to be firm and opinionated, rather than one adopting a passive Anglo management style where confrontation is avoided at all costs.
And critique of the work is expected when the quality of the work is in question. But all the problems here are about conduct, which doesn’t seem to be getting through.
Definitely agree that goroutines don’t suck; they make Go one of the only languages without the “function coloring” problem: true N:M multithreading without separate sync and async versions of the IO libraries (and thus of everything built on top of them).
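To illustrate what “no function coloring” means in practice, here’s a minimal sketch (the URLs and the `fetchStatus` helper are made up for the example, and it does real network IO): the same ordinary blocking function works both synchronously and inside goroutines, with no separate async variant.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// fetchStatus does ordinary blocking IO. There is no separate "async"
// variant: the runtime multiplexes goroutines onto OS threads (N:M)
// and parks a goroutine whenever it blocks.
func fetchStatus(url string) string {
	resp, err := http.Get(url)
	if err != nil {
		return err.Error()
	}
	defer resp.Body.Close()
	return resp.Status
}

func main() {
	// The same function works synchronously...
	fmt.Println(fetchStatus("https://example.com"))

	// ...or concurrently, with no change to its signature.
	var wg sync.WaitGroup
	for _, u := range []string{"https://example.com", "https://example.org"} {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			fmt.Println(u, "->", fetchStatus(u))
		}(u)
	}
	wg.Wait()
}
```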
I think channels have too many footguns (what should a channel’s buffer size be? how do you close one without causing panics when there are multiple writers?), so they’re definitely better “abstracted out” at the framework level. The channel most developers actually interact with is the `Context.Done()` channel, typed `<-chan struct{}`.
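For what it’s worth, the usual pattern looks something like this (a minimal sketch; the worker and timings are invented for illustration): the code never hand-rolls its own cancellation channel, it just selects on `ctx.Done()`.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// worker loops until the context is cancelled. ctx.Done() is the
// <-chan struct{} that gets closed when cancellation happens.
func worker(ctx context.Context, id int) {
	for {
		select {
		case <-ctx.Done():
			fmt.Printf("worker %d: stopping (%v)\n", id, ctx.Err())
			return
		case <-time.After(100 * time.Millisecond):
			fmt.Printf("worker %d: tick\n", id)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go worker(ctx, 1)
	time.Sleep(350 * time.Millisecond)
	cancel()                          // closes ctx.Done(); the worker sees it and returns
	time.Sleep(50 * time.Millisecond) // give the worker time to print
}
```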
Also, I'm not sure whether the Go authors originally intended closing a channel to effectively have multicast semantics (all readers are notified, no matter how many there are); every other channel operation is point-to-point (each value is delivered to exactly one reader), and it turns out that this multicast semantics is much more interesting.
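To make that contrast concrete, a small sketch (names are illustrative): a value sent on a channel would wake exactly one receiver, but `close` wakes every receiver blocked on the channel.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	done := make(chan struct{})
	var wg sync.WaitGroup

	// Start several readers, all blocked on the same channel.
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			<-done // receiving from a closed channel returns immediately
			fmt.Printf("reader %d: got the broadcast\n", id)
		}(i)
	}

	// A send would wake exactly one reader; close notifies all of them.
	close(done)
	wg.Wait()
}
```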