This. Democracy is up against an extremely organized, centralized, and well-resourced effort decades in the making, with seemingly nothing comparable to combat it, which puts those opposing it on completely reactive footing.
It is hard to see how a reactive group can come out on top in such a case.
The problem is we got rid of "democracy" a long time ago.
The original premise was that you have a lot of elected officials who act as checks and balances on one another. So, for example, to pass a law it has to be voted on by the House (elected officials) and the Senate (originally elected by state legislatures, giving the states, an independent elected body, a voice in the federal government; not anymore), then signed by the President (another elected official), and then, as a final check, upheld by the courts (whose judges are appointed by the President, confirmed by the Senate, and serve lifetime terms).
Then we effectively replaced most of that with administrative bureaucrats that act only within the executive branch. They're not only not directly elected, they're not even indirectly elected by the Senate; the President appoints them -- or they're hired by other unelected bureaucrats -- and then they tend to stick around between administrations because there are so many of them that you can't plausibly replace millions of people every time the constituents want to change who is in office.
Meanwhile they make the rules and enforce them and bypass the courts through coercive plea bargaining. But we call an attack on this system an attack on democracy?
This behavior should be an early warning sign of potential future enshittification and a reason to consider open-weight models you can host elsewhere.
If you are building on models that could disappear tomorrow when a company needs to juice the launch of a new model (or increase prices), you are introducing avoidable risk.
Doesn't matter at all if the newer model is earth-shatteringly good (and this one doesn't seem to be): If I can't reliably access the models I've built my tooling on top of... I'm very unhappy.
If this note is just intended for the GUI chat interface they provide, fine. I don't love it, but I get it.
But if the older models start disappearing from the paid API surfaces (e.g., I can no longer pin a precise snapshot such as "gpt-4o-2024-08-06" or "gpt-3.5-turbo-1106"), then this is a great reason to abandon OpenAI entirely as a platform.
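To make the pinning concern concrete, here is a minimal sketch of how tooling might guard against a pinned snapshot vanishing. The snapshot names are the real published identifiers mentioned above, but the `resolve_model` helper and its fallback policy are illustrative assumptions, not part of OpenAI's API:

```python
# Hypothetical guard for tooling built on a specific model snapshot.
# PINNED is the exact snapshot the tooling was validated against;
# FALLBACKS are floating aliases whose behavior may drift over time.

PINNED = "gpt-4o-2024-08-06"
FALLBACKS = ["gpt-4o"]

def resolve_model(available: set[str]) -> str:
    """Return the pinned snapshot if the provider still serves it;
    otherwise fall back loudly rather than drifting silently."""
    if PINNED in available:
        return PINNED
    for name in FALLBACKS:
        if name in available:
            print(f"WARNING: {PINNED} is gone; falling back to {name}")
            return name
    raise RuntimeError("No acceptable model available; refusing to run")

# In practice, `available` would be populated from the provider's
# model-listing endpoint at startup.
print(resolve_model({"gpt-4o-2024-08-06", "gpt-4o"}))  # → gpt-4o-2024-08-06
```

The point is that a failing check at startup is recoverable; silently serving a different model to users is not.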
Eh, depending on the stress of the work, how much I enjoyed it, etc., $250M buys a lot of convenience in life, lets you keep doing the work for as long as you want, and can be truly transformational generational wealth.
Roo Code has had Orchestrator mode doing this for a while with your models of choice. And you can tweak the modes or add new ones.
What I have noticed is the forcing function of needing to think through the technical and business considerations of one's work up front, which can be tedious if you are the type that likes to jump in and hack at it.
For many types of coding needs, that is likely the smarter and ultimately more efficient approach. Measure twice, cut once.
What I have not yet figured out is how to reduce the friction in the UX of that process to make it more enjoyable. Perhaps by sprinkling some dopamine-triggering gamification into answering the questions.
Unfortunately as an early NMS player with hundreds of hours, I have seen nothing that gives me hope that LNF will have the depth that is needed for the world to feel like that. Mile wide, inch deep.
What made EQ an experience was that those areas were static and took real skill to figure out how to do things.
I am curious, given the number and quality of signals they can capture from this, how uniquely they can identify individuals and determine things like age, gender, weight, etc. Particularly when analyzed probabilistically alongside other household-level data they likely have.