yeah i think they shot themselves in the foot a bit here by creating the o series. The truth is that GPT-5 _is_ a huge step forward for the "GPT-x" models: until now, the flagship GPT-x model was basically still 4o, with 4.1 available in some capacity, and GPT-5 vs GPT-4o looks like a massive upgrade.
But it's only an incremental improvement over the existing o line, so people feel the improvement over the current OpenAI SoTA isn't there to justify a whole version bump. They probably should have just called o1 GPT-5 last year.
You cannot even access the other models from the app any more. This is a huge bummer that has me considering other brands. I don't trust GPT-5 yet, but I do trust 4.1, and most of my in-progress conversations are 4.1-based.
GPT-5 hasn't landed for me yet, but this has been my thought process too. This seems like a moment potentially equivalent to when Google got lowest-common-denominator-ed: when it stopped respecting your query keywords and started doing "smart" things instead. If GPT-5 in practice turns out to be similarly optimized for lowest-common-denominator usage at the cost of precise control over which model you're using, that will be the thing that finally gets me properly using Claude, Gemini, and local models regularly.
The jump from 3 to 4 was huge. There was an expectation of a similar leap here.
Making it cheaper is certainly a good goal, but they needed a huge marketing win too.