That's an interesting approach that could improve some things, but I think the underlying problem is that incentives need to change. It's not only about metrics but about giving honest opinions: which use cases do you really think your algorithm is suited for, rather than viewing it in the most optimistic possible light? If academia weren't as ultra-competitive as it's become in the past two decades or so, there would be a better chance of getting honest and useful answers to such questions in papers. One still finds them sometimes in papers by people who no longer have to play "the game": papers by senior full-professor types are often quite interesting precisely because they can say what they really think.