So this is what happens when you're owned by Microsoft, which has an exclusive contract with OpenAI.
A couple of weeks ago I ran a few experiments with AI-based code generation (https://news.ycombinator.com/item?id=27621114) using a GPT model more suitable for code generation: it sounds like this new "Codex" model is something similar.
If anyone from GitHub is reading this, please give me access to the Alpha so I can see what happens when I give it a should_terminate() function. :P
I saw that post, neat stuff. We made an attempt to develop something similar 4 years ago and take it to YC, but it simply wasn't good enough often enough, because our training data (Stack Overflow posts) was garbage and models were weaker back then. I figured it would take about 5 years for it to really be useful given the technology trajectory, and here we are.
I'll note that we weren't trying to build "code auto-complete" but instead an automated "rubber duck debugger": something that would function enough like another contextually ignorant but intelligent programmer that you could explain your issues to it and illuminate the solution yourself. But we did a poor job of cleaning the data, and we found that English questions started returning Python code blocks, sometimes contextually relevant. It was neat. This GitHub/OpenAI project is neater.
I would be curious what the cost of developing and running this model is, though.