I think Anthropic currently has a slight edge for coding, but this changes constantly with every new model. For business applications, where tool calling and multi-modality matter a lot, OpenAI is and always has been superior; only recently has Google started to put some small dents in that moat. OpenAI also has the best platform, though less because it is good and more because Google and Anthropic are truly dismal in every regard when it comes to devx. I also feel like Google has accrued an edge in hard-core science, but that is just a personal feeling, and I haven't seen any hard data on it yet.
The one thing that really surprised me, the thing I learned that has affected my conversational abilities the most: turn-taking in conversation is a negotiation. There are no set rules, but there are protocols:
- bids
- holds / stays
- implications (semantic / prosodic)
But in the best conversations the actual flow is deeply semantic, and the rules are very much a "dance", a negotiation between partners.
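To make that concrete, here is a minimal toy sketch (my own illustration, not code from any real system) of one round of the negotiation: a bid for the floor only succeeds if the current speaker isn't holding it, and every signal arrives through a semantic or prosodic channel.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Move(Enum):
    BID = auto()    # listener signals a desire to take the turn
    HOLD = auto()   # current speaker signals they intend to keep the floor
    YIELD = auto()  # current speaker releases the floor

@dataclass
class Signal:
    speaker: str
    move: Move
    channel: str  # "semantic" (e.g. a trailing "right?") or "prosodic" (pitch, pause)

def next_floor(current: str, signals: list[Signal]) -> str:
    """Resolve one round of the turn-taking negotiation.

    A bid only wins the floor if the current speaker yields or is not
    holding; otherwise the floor stays put. There is no fixed rule, just
    signals each side interprets -- hence the "dance".
    """
    holding = any(s.speaker == current and s.move == Move.HOLD for s in signals)
    yielded = any(s.speaker == current and s.move == Move.YIELD for s in signals)
    bidders = [s.speaker for s in signals if s.move == Move.BID and s.speaker != current]
    if bidders and (yielded or not holding):
        return bidders[0]  # first successful bid takes the floor
    return current

# B bids while A holds: A keeps the floor. A yields: B takes it.
print(next_floor("A", [Signal("B", Move.BID, "prosodic"), Signal("A", Move.HOLD, "semantic")]))  # A
print(next_floor("A", [Signal("B", Move.BID, "semantic"), Signal("A", Move.YIELD, "prosodic")]))  # B
```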
That's an interesting way to think about it, I like that.
It also implies that being the person who has something to say but can't get into the conversation, because they're following the conversational semantics, is akin to going to a dance in your nice clothes and not being able to find a dance partner.
Yeah, I can relate to that. Maybe it's also because you're too shy to ask someone to dance. I think I learned that lesson: just ask, and be unafraid to fail. Things tend to work themselves out. Much of this is experimentation, and I think our models need to be open to that, which is one cool thing about Sparrow-1: it's a meta-in-context learner. This means that when it tries and fails, or you try and fail, it learns at runtime to adapt.
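For illustration only, since Sparrow-1's internals aren't public: the general idea of meta-in-context learning is that failures stay in the running context, so the next attempt is conditioned on what went wrong. The names here (`converse`, `model_call`) are hypothetical placeholders for whatever backend you use.

```python
from typing import Callable, Optional

def converse(
    model_call: Callable[[list[dict]], str],  # placeholder for any chat backend
    history: list[dict],
    user_msg: str,
    feedback: Optional[str] = None,
) -> str:
    """One turn of a meta-in-context loop.

    If the previous attempt failed, the failure feedback is appended to
    the context before the next call, so the model adapts at runtime
    without any weight updates.
    """
    if feedback:
        history.append({"role": "system", "content": f"Previous attempt failed: {feedback}"})
    history.append({"role": "user", "content": user_msg})
    reply = model_call(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Each failed try enriches the context rather than being discarded, which is the "just ask, and be unafraid to fail" loop expressed in code.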
I use Claude Code for everything, and I love Anthropic's models. I don't know why, but it wasn't until reading this that I realized I can use Sparrow-1 with Anthropic's models within CVI. Adding this to my to-do list.