The quiet victory here is that this seems to be built to be faster and cheaper than o3 while still delivering a reasonable capability jump, which is what matters for the scaling laws.
On the other hand, if it's just getting bigger and slower, that's not a good sign for LLMs.
Yeah, this very much feels like "we made a more efficient/scalable model and we're selling it as the new shiny, but it's really just an internal optimization to reduce cost."
Oh, it's exciting, but not as exciting as when sama pumps GPT-5 speculation and the market thinks we're a stone's throw away from AGI, which it appears we're not.