I'm always in the market for new SQL tooling, but I'm wondering what the use case is there.
Isn't it much quicker to write a one-line migration than to copy the DDL, adapt it to the desired state, and then get the migrations from that tool? Or am I misunderstanding something?
Be sure to look at the actual sqldef command-line tool, not the trivial copy-and-paste demo on their website. Declarative schema management works best in combination with a Git repo.
In the big picture, declarative schema management has lots of advantages around avoiding/solving schema drift, either between environments (staging vs prod) or between shards in a sharded setup (e.g. among thousands of shards, one had a master failure at an inopportune time and ended up missing a schema change).
It's also much more readable to have the "end state" in your repo at all times, rather than a sequence of ALTERs.
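To make that concrete, here's a rough sketch (table and column names are made up, and the exact output depends on the tool): the repo holds only the desired end state, and the tool diffs a live database against it and emits whatever ALTERs are needed.

    -- schema.sql in the repo: the desired end state
    CREATE TABLE users (
      id         BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      email      VARCHAR(255) NOT NULL,
      created_at DATETIME NOT NULL,
      UNIQUE KEY uniq_email (email)
    );

    -- If some environment's live table is missing created_at, diffing it
    -- against the file above produces something like:
    ALTER TABLE users ADD COLUMN created_at DATETIME NOT NULL;

No ALTER history accumulates in the repo; the CREATE statement above is always the current truth.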
Yes, we've used skeema for this for many years, and it is just plain lovely. Putting your desired end state into source control is so much more intuitive and understandable than accumulating migrations. In a way it's like the difference between jQuery and React: you just say how you want it to look in the end, and the computer does the work to figure out how to make it so.
Not sure what that means, but it's named partially as a nod to Skee-Ball. The town I grew up in was actually the home of the factory where all Skee-Ball machines were made.
I was using a location-related naming scheme in general at that time; similarly my automation library was called Go La Tengo because I was living in the town where the band Yo La Tengo was from.
Out of curiosity: the post you linked mentions that it won't work for renames. What's the approach for these and other kinds of procedural migrations, such as data transformations (i.e. splitting a column, changing a type, etc.)?
With a declarative model, would you run the migration and follow immediately with a one-off script?
For both data migrations and renames, there isn't really a one-size-fits-all solution. That's actually true when doing data changes or renames with imperative (incremental) migration tools too; they just don't acknowledge it, and at scale these operations aren't really viable anyway. They inherently require careful coordination with application deploys, which can't be timed to land at the exact moment the migration completes, and you need to guard against user-facing errors or data corruption from intermediate/inconsistent state.
With row data migrations on large tables, there's also a risk of long/slow transactions destroying prod DB performance due to MVCC impact (a pile-up of old row versions). So at minimum you need to break a large data change into smaller chunked transactions, and have application logic that accounts for these migrations running in the background in a non-atomic fashion.
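As a hedged illustration of the chunking point (table and column names here are hypothetical), the backfill gets driven as many small, bounded transactions rather than one statement over the whole table:

    -- Risky on a big table: one huge transaction, long-lived old row versions
    -- UPDATE orders SET status_v2 = UPPER(status);

    -- Safer: one small primary-key range per transaction, advanced by a loop
    -- in a script or migration framework, with a short pause between chunks.
    UPDATE orders
    SET    status_v2 = UPPER(status)
    WHERE  id BETWEEN 1 AND 10000     -- shift this range each iteration
      AND  status_v2 IS NULL;         -- keeps the backfill idempotent on retry
    -- COMMIT, let replicas catch up, then repeat until the end of the table.

The application also has to tolerate rows where status_v2 hasn't been populated yet, since the backfill isn't atomic.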
That all said, to answer from a mechanical standpoint of "how do companies using declarative schema management also handle data migrations or renames":
At large scale, companies tend to implement custom/in-house data migration frameworks. Or for renames, they're often just outright banned, at least for any table with user-facing impact.
At smaller scale, yeah you can just pair a declarative tool for schema changes with an imperative migration tool for non-schema changes. They aren't really mutually exclusive. Some larger schema management systems handle both / multiple paradigms.
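For instance, a column split under that pairing might look roughly like this expand/contract sequence (names are hypothetical, MySQL syntax): the declarative tool owns the DDL on either end, and the imperative backfill and application cutover happen in between.

    -- 1. Declarative: add the new columns to the desired-state schema; the
    --    tool turns that into ALTERs.
    ALTER TABLE customers
      ADD COLUMN first_name VARCHAR(100),
      ADD COLUMN last_name  VARCHAR(100);

    -- 2. Imperative: backfill from the old column, chunked as described above.
    UPDATE customers
    SET    first_name = SUBSTRING_INDEX(full_name, ' ', 1),
           last_name  = SUBSTRING_INDEX(full_name, ' ', -1)
    WHERE  id BETWEEN 1 AND 10000
      AND  first_name IS NULL;

    -- 3. Deploy application code that reads and writes the new columns.

    -- 4. Declarative, later: remove full_name from the desired-state schema,
    --    so the tool drops it once nothing references it.
    ALTER TABLE customers DROP COLUMN full_name;

The coordination problem described above lives in steps 2-3, which is exactly the part no schema tool can fully automate for you.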
There's a lot of variety in the residential proxy market. Some are sourced from bandwidth sharing SDKs for games with user consent, some are "mislabeled" IPs from ISPs that offer that as a product, and then there's a long tail of "hacked" devices. Labeling them all generically as sketchy seems wrong.
> Some are sourced from bandwidth sharing SDKs for games with user consent...
The notion that most people installing a game meaningfully consent to unspecified ongoing uses of their Internet connection resold to undeclared third parties gave me a good, hearty belly laugh. Especially expressed so matter-of-factly.
When a game shows an unskippable ad, the user is consciously aware of what is happening, as it is happening, and can close the program to stop watching the ad. It is in no sense comparable to what you describe.
When a third party library bundled into a game makes ongoing, commercial, surreptitious use of the user's Internet access, the vast majority of users aren't meaningfully consenting to that use of their residential IP and bandwidth because they understand neither computers nor networks well enough to meaningfully consent.
I don't doubt your bases are sufficiently covered in terms of liabilities. I don't doubt that some portion of whatever EULA you have (that your users click right on past) details in eye-watering legalese that you are reselling their IP and bandwidth.
It's just... The notion that there has been any meeting of minds at all between your organization and its games' users on the matter of IP address and bandwidth resale is patently risible.
Glad I'm not the only one. It seems to use {popular website without tld}@example.com as a pattern, so I'm getting a lot via my catch-all address even though I haven't used the specific inbox yet.
That is most of the "productivity" bubble, with or without AI. You try to fit everything into tightly defined processes, categories, and methodologies so you don't have to actually sit down and do the work.
I think they should have requested KYC when I was complaining about being unable to log into gmail, but I’m not going to complain as long as the service works.
I don’t use Luminati for anything illegal though, so it’s possible they just have some super amazing abuse detection algorithms that know this.
Won't work for any popular site. You can try that easily by using extensions to set the user agent. If you are not checking the public list of IPs that Google publishes for the crawler, you are doing it wrong.