To be smarter than human intelligence, you need smarter-than-human training data. Humans already innately know right from wrong a lot of the time, so that doesn't leave much room for improvement.
This is a very good point! I remember reading about AlphaGo and how they got better results training it against itself than training it on historical human-played games.
So perhaps the solution is to train the AI against another AI somehow... but it's hard to imagine how that extends to general-purpose tasks.
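For concreteness, here's a toy sketch of the self-play idea: the training signal comes from games the agent plays against a frozen copy of itself rather than from a fixed dataset of human games, so no human ceiling is baked into the data. Everything here (the `Policy` class, the "game", the update rule) is made up for illustration and has nothing to do with AlphaGo's actual implementation:

```python
import random

class Policy:
    """Toy 'policy' that just outputs a noisy strength value."""
    def __init__(self, strength=0.0):
        self.strength = strength

    def play(self):
        # A 'move' is the current strength plus some exploration noise.
        return self.strength + random.gauss(0, 1.0)

def self_play_step(policy, lr=0.1):
    # Freeze a copy of the current policy to act as the opponent.
    opponent = Policy(policy.strength)
    mine, theirs = policy.play(), opponent.play()
    # Learn from whichever move won this game (here, simply the higher value).
    winner_move = max(mine, theirs)
    policy.strength += lr * (winner_move - policy.strength)

policy = Policy()
for step in range(2000):
    self_play_step(policy)

print(f"strength after self-play: {policy.strength:.1f}")  # keeps ratcheting upward
```

The point of the toy is just that the data improves as the player improves, because the opponent improves with it; there's no fixed human-generated corpus capping what can be learned.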
Gentle suggestion that there is absolutely no such thing as "innately know". That's a delusion, albeit a powerful one. Everything is driven by training data. What we perceive as "thinking" and "motivation" are emergent structures.
Innately as in you are born with it: the DNA did the learning, not us humans. We have no clue how DNA learned to produce thinking other than "survival of the fittest", and that is the oldest AI training method in the book.
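"Survival of the fittest" as a training loop, in a rough sketch: this is just a generic genetic-algorithm skeleton with a made-up genome and fitness function, not a claim about how evolution or DNA actually works:

```python
import random

# Minimal "survival of the fittest" loop: random variation plus selection.
# Purely illustrative: the 'genome' is a list of numbers and fitness is
# how close it gets to a hidden target vector.
TARGET = [0.1, 0.7, 0.3, 0.9]

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.random() for _ in TARGET] for _ in range(50)]
for generation in range(200):
    # Keep the fittest half, refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print([round(g, 2) for g in best])  # ends up close to TARGET
```

No teacher and no labeled data anywhere in that loop, just variation and selection, which is the sense in which it's the oldest training method around.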