I get the feeling that if you really wanted a true transparency report, it would discuss government requests, but also the prevalence, detection, and restriction of bots and vote farming; the true likelihood of a random submission going anywhere; acts of Reddit censorship and the underlying, reportable reasons for them; and user tracking, along with the sale and mining of user data.
I get the sense that nearly all social networks have a large financial incentive either to not talk about, or to minimise and downplay, the volume and impact of bots and manipulative actors.
The chance of this behaviour getting addressed is microscopic while these incentives remain as they are.
Traffic quality analysis at these sorts of companies is often far worse than you'd expect. I wouldn't be surprised if they had no accurate picture of the full scope of bots on their network.
Consider that if they study the problem, they may discover it's severe and that they can't fix it. Now they have an even more serious problem: if they lie about the report's existence, they're defrauding advertisers; if they admit its contents, advertisers and users will flee.
So it's rational for them not to look in the first place. If they don't know, they're not technically lying. That's good for them, but for nobody else.
This "transparency report" is similar to Google's "do no evil" thing, in the sense that it is purely a marketing/PR play, and otherwise completely and utterly meaningless.
I don't see 95% of that in the actual report.