To the contrary, I didn't even send this post to my mailing list. It wasn't exactly a throwaway post, but it was close to one: a post I didn't expect anyone to care much about.
Sorry, I shouldn't have presumed. But the prior probability these days is so high. And I don't blame anyone for doing what they need to do to get attention, especially if it is putting food on their table.
In principle you could use Fibre Channel to connect a really large number (2²⁴, iirc) of disks to a single server and then create a single ZFS pool using all of them. This lets you scale the _storage_ as high as you want.
But that still limits you to however many requests per second your single server can handle. You can scale that pretty high too, but probably not by a factor of 2²⁴.
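Back-of-the-envelope, with made-up numbers (the per-disk capacity and the single-server throughput ceiling below are my own assumptions, just to show the shape of it):

```python
# Toy arithmetic: Fibre Channel's 24-bit address space bounds how many
# disks one server could pool, but not how many requests it can serve.
FC_ADDRESSES = 2**24          # ~16.7M addressable devices
DISK_TB = 20                  # assumed capacity per disk, in TB

print(f"max disks:   {FC_ADDRESSES:,}")
print(f"raw storage: {FC_ADDRESSES * DISK_TB / 1e6:.0f} EB")  # 1 EB = 1e6 TB

# The bottleneck that doesn't move: one server's request throughput.
SERVER_RPS = 1_000_000        # assumed ceiling for a single beefy box
print(f"throughput ceiling stays ~{SERVER_RPS:,} req/s either way")
```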
> [rqlite and dqlite] are focused on increasing SQLite’s durability and availability through consensus and traditional replication. They are designed to scale across a set of stateful nodes that maintain connectivity to one another.
Little nitpick there: consensus anti-scales. You add more nodes and it gets slower. The rest of the section on rqlite and dqlite makes sense, though, just not the part about "scale".
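Here's a rough sketch of the intuition, assuming a leader-based protocol like Raft (toy message counts, not a benchmark):

```python
# Each commit makes the leader talk to more peers as the cluster grows:
# AppendEntries fan-out to every follower, acks back from a majority.
# Per-commit leader work grows with n, so adding nodes doesn't add throughput.
def leader_messages_per_commit(n: int) -> int:
    followers = n - 1
    majority_acks = n // 2      # leader counts itself toward the quorum
    return followers + majority_acks

for n in (3, 5, 9, 17):
    print(f"{n:>2} nodes -> {leader_messages_per_commit(n)} messages/commit")
```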
Hey Phil! You're 100% right. I should use a different word than scale. I meant scale in the sense that they "scale" durability and availability, but obviously it reads like I'm saying they scale performance.
I've changed the wording to "They are designed to keep a set of stateful nodes that maintain connectivity to one another in sync." Thank you!
I’ll nitpick you back: if done correctly, consensus groups can have quite a positive impact on tail latency. As the membership size gets bigger, the expected tail latency of the committing quorum goes down, assuming independence and any sort of fat-tailed distribution for the individual participants.
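A quick simulation of what I mean (a sketch with assumed parameters: i.i.d. Pareto per-node latencies and a simple majority quorum, nothing rigorous):

```python
import random

def p99_commit_latency(n: int, trials: int = 50_000) -> float:
    """p99 of the majority-quorum commit latency for n participants."""
    q = n // 2 + 1                        # majority quorum size
    samples = []
    for _ in range(trials):
        # Fat-tailed per-node latency, i.i.d. (assumed Pareto, alpha=2).
        latencies = sorted(random.paretovariate(2.0) for _ in range(n))
        samples.append(latencies[q - 1])  # commit when the q-th node acks
    samples.sort()
    return samples[int(0.99 * trials)]

# The quorum's tail latency drops as membership grows.
for n in (3, 5, 9, 17):
    print(f"{n:>2} nodes -> p99 commit latency {p99_commit_latency(n):.3f}")
```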
If folks would like to see more examples of databases built as self-teaching projects, they get shared on the /r/databasedevelopment subreddit not infrequently.