How are GitHub/OpenAI ensuring this tech doesn’t push the industry toward a degraded human/tech capability, or a dystopia of sorts?
Like… a bubble of code developed from yesterday’s code (plenty flawed itself), growing exponentially on self-fulfilling feedback loops. I’m assuming Copilot (and every OpenAI variant to come) will essentially retrain itself on newly written code, which over time might all be code it wrote itself.
Did we just create a hamster wheel for the industry, or a high-rise elevator?
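For what it’s worth, the worry can be sketched as a toy simulation. Nothing below reflects how Copilot is actually trained; the defect rates, corpus sizes, and mixing ratio are invented purely to show how flaws can compound when a model’s own output is folded back into its training data:

```python
# Toy simulation of the feedback loop, under made-up assumptions: a model
# reproduces the defect rate of the corpus it was trained on, generation adds
# a little fresh error, and the output is mixed back into the next corpus.
import random

random.seed(0)

defect_rate = 0.10      # assumed fraction of flawed snippets in today's corpus
corpus_size = 100_000   # assumed size of the human-written starting corpus

for generation in range(1, 11):
    # Hypothetical: the model matches its training defect rate, plus 1% of
    # new errors introduced per generation pass.
    generated_defect_rate = min(1.0, defect_rate + 0.01)

    # Assume the model's output grows to equal the existing corpus each round,
    # so synthetic code steadily dominates the training mix.
    generated_size = corpus_size
    defect_rate = (
        defect_rate * corpus_size + generated_defect_rate * generated_size
    ) / (corpus_size + generated_size)
    corpus_size += generated_size

    print(f"gen {generation:2d}: corpus defect rate ~ {defect_rate:.3f}")
```

Under these (cherry-picked) numbers the defect rate only ratchets upward, which is the hamster-wheel case; any real answer depends entirely on whether the training pipeline filters what gets folded back in.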
I do think this turns into a modern Stack Overflow, though, if it observes local runtime errors and the debugging process, and tidies/fixes the code it produces itself (a rough sketch of that kind of filter is below). Plus it can train on all the SO questions out there.
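A minimal sketch of that idea, assuming an execution-based filter: only fold generated code back into the training pool if it actually runs cleanly. The snippet list and the notion of a "training pool" here are hypothetical stand-ins; a real pipeline would need sandboxing, dedup, real test suites, and so on:

```python
# Sketch of execution filtering (hypothetical pipeline step): run each
# generated snippet in a subprocess and keep only the ones that exit cleanly.
import os
import subprocess
import sys
import tempfile


def passes_locally(snippet: str, timeout_s: float = 5.0) -> bool:
    """Run a candidate snippet; treat a clean exit as a pass."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(snippet)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout_s
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)


candidates = [
    "print(sum(range(10)))",      # runs fine -> kept
    "print(undefined_variable)",  # NameError at runtime -> filtered out
]

training_pool = [c for c in candidates if passes_locally(c)]
print(f"kept {len(training_pool)} of {len(candidates)} candidate snippets")
```

That kind of runtime signal is what would turn the loop from hamster wheel into elevator: the model wouldn’t just see more of its own code, it would see which of its code actually worked.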