Somehow this works when dealing with pedophile content, so the tech already exists and is active.
For example, on Discord, all your messages are scanned for it; Cloudflare has done the same for over 5 years.
For now it means they have no interest in removing such content unless coerced or swayed by public opinion.
This would destroy all content though, not just for minors.
Absurd, but it works in North Korea (death penalty), Iran (death penalty), and China (10-year prison sentence), and it also protects victims from rape, or "rape" under financial pressure.
The alternative is to leave it to parents to install a web filter for their kids, and let everyone else live freely on the internet without sharing their history or having to ID themselves.
In reality, TikTok also hosts genuinely traumatizing content, yet it engages tons of kids and teenagers; ID checks won't solve that, but good parents can.
I agree that does work, but some parameters differ enough to make policing it that strongly worth the tradeoff, like the size of the audience and the much more severe real-world harm caused by its production and distribution.
I just floated this idea as a "solution" to see what others think, but I don't know. Again, perhaps educating parents on how to teach kids about the dangers of the internet, plus a web filter for kids.
This is actually one place where AI could be useful: dynamic local content classification (instead of a static blocklist), especially if integrated directly into Android / iPhone.
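To make the contrast concrete, here is a toy sketch of the two approaches. Everything in it is illustrative: the hard-coded term weights are a stand-in for a real on-device ML model, and the hostnames and threshold are made up.

```python
# Static blocklist: only catches hosts someone already added to a list.
BLOCKLIST = {"badsite.example"}

# Stand-in for a local model's per-term risk scores (illustrative values).
UNSAFE_TERMS = {"gore": 0.9, "gambling": 0.6, "violence": 0.7}

def blocklist_filter(url: str) -> bool:
    """Static approach: block only previously known hosts."""
    return url in BLOCKLIST

def classify(text: str, threshold: float = 0.5) -> bool:
    """Dynamic approach: score the page content locally, no list to sync.
    A real implementation would run a small on-device classifier instead
    of this naive keyword lookup."""
    words = text.lower().split()
    score = max((UNSAFE_TERMS.get(w, 0.0) for w in words), default=0.0)
    return score >= threshold

# A brand-new, unlisted site slips past the blocklist,
# but its content can still be flagged locally:
print(blocklist_filter("newsite.example"))    # False: host not on the list
print(classify("graphic gore compilation"))   # True: content flagged locally
```

The point of the dynamic version is that it needs no central list to maintain or leak browsing history to; the classification happens entirely on the device, which is why OS-level integration would matter.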