There is a big difference between the above 'request' and, say, me politely asking the time of a complete stranger I walk by on the street.
Requests containing elements of hostility, shame, or injury frequently serve dual purposes: (1) the ostensible aim of eliciting an action and (2) the underlying objective of inflicting some form of harm (here shame) as a means of compelling compliance through emotional leverage. Even if the respondent doesn't honor the request, the secondary purpose is still achieved.
These are good points, but I think they represent a somewhat narrow view of the issue. What's happening here is that we're discussing among ourselves what kinds of actions would be good or bad with respect to AI, just as we would with any other social issue, such as urban development, immigration, or marital infidelity. You could certainly argue that saying "please don't replace wetlands with shopping malls" or "please don't immigrate to the United States" has "the underlying objective of inflicting some form of harm (here shame) as a means of compelling compliance through emotional leverage."
But it isn't a given that this will be successful; the outcome of the resulting conversation may well be that shopping malls in general, or a particular shopping mall, are more desirable than wetlands, in which case the ostensible respondent will be less likely to comply than they would have been without the conversation. And in this case, the conversation seems to be tending strongly toward favoring the use of tools like Grammarly rather than opposing them.
So I don't oppose starting such conversations. I think it's better to discuss ethical questions like this openly, even though sometimes people suffer shame as a result.