Not looking forward to dealing with this from a security point of view. It's difficult to get developers to accept responsibility for security vulnerabilities in libraries they've selected for their project ("That's not my code!"). I can see the same thing happening with generated code where they don't want to take responsibility for finding a way to remediate any vulnerabilities they didn't personally type in. Of course those who exploit the vulnerabilities won't care how it got into the code. They're just happy they're able to make use of it.
It says it's trained on "billions of lines of code".
I would amend that to "billions of lines of code that may or may not be safe and secure".
If they could tie CodeQL into Copilot's pipeline to ensure the training set only came from code with no known security concerns, that would be a big improvement.
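As a rough sketch of what that filtering step might look like (the `findings` mapping here is a hypothetical stand-in for real CodeQL analysis output, e.g. per-file SARIF alerts; nothing below invokes the actual CodeQL CLI):

```python
# Sketch: keep only corpus files with zero known security findings
# before they are used as training data. `findings` maps file paths
# to a list of alert ids, standing in for real scanner output.

def clean_training_set(corpus, findings):
    """Return only the files whose scan produced no security alerts."""
    return {path: src for path, src in corpus.items()
            if not findings.get(path)}

corpus = {
    "safe.py": "def add(a, b): return a + b",
    "risky.py": "os.system(user_input)  # command injection",
}
findings = {
    "risky.py": ["py/command-line-injection"],  # hypothetical alert id
}

print(sorted(clean_training_set(corpus, findings)))  # ['safe.py']
```

Even a coarse filter like this wouldn't catch unknown vulnerabilities, but it would at least stop the model from learning patterns that scanners already flag.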