To provide a different perspective: this was caught, acknowledged as a problem, rolled back quickly, and written up in a post-mortem.
Given that nothing can be done perfectly, this seems like the best way OpenAI could have handled it.
What if building AI is like building airplanes: everybody's afraid of it, so the industry gets really good at rolling lessons learned back into the product, and the end result is actually fine?