There is a big difference between the above 'request' and, say, me politely asking the time of a complete stranger I walk by on the street.
Requests containing elements of hostility, shame, or injury frequently serve dual purposes: (1) the ostensible aim of eliciting an action and (2) the underlying objective of inflicting some form of harm (here shame) as a means of compelling compliance through emotional leverage. Even if the respondent doesn't honor the request, the secondary purpose is still achieved.
These are good points, but I think they represent a somewhat narrow view of the issue. What's happening here is that we're discussing among ourselves what kinds of actions would be good or bad with respect to AI, just as we would with any other social issue, such as urban development, immigration, or marital infidelity. You could certainly argue that saying "please don't replace wetlands with shopping malls" or "please don't immigrate to the United States" has "the underlying objective of inflicting some form of harm (here shame) as a means of compelling compliance through emotional leverage."
But it isn't a given that this will be successful; the outcome of the resulting conversation may well be that shopping malls are, or a particular shopping mall is, more desirable than wetlands, in which case the ostensible respondent will be less likely to comply than they would have been without the conversation. And, in this case, it seems that the conversation is strongly tending toward favoring the use of things like Grammarly rather than opposing it.
So I don't oppose starting such conversations. I think it's better to discuss ethical questions like this openly, even though sometimes people suffer shame as a result.
A conversation on the topic certainly did ensue; see https://news.ycombinator.com/item?id=44492524 and https://news.ycombinator.com/item?id=44493015. Perhaps you mean to say that this wasn't the intended effect? But it was at least a highly predictable effect. Perhaps it would have gone better for the flamer if they had made the request without flaming not only the author in question but also simonw.
To me the request in question seems to be in the same spirit as "Please don't play your music so loud at night", "Please don't look at my sister", or "Please don't throw your trash out your car window". In each of these cases, there's clearly a conflict between different people's desires, probably accompanied by underlying disagreements about relevant duties; perhaps one person believes the other has a duty to avert their gaze from the sister in question to show respect for her chastity, while their interlocutor does not subscribe to any such duty, believing he is entitled to look at whomever he pleases. Or perhaps one person believes the other has a duty to carry their trash to a trash can, while the other does not.
Given that such a conflict has arisen, how can we resolve it? We could refrain entirely from trying to influence one another's behavior, which is the lowest-effort approach, but this clearly leads to deeply suboptimal outcomes in many cases; perhaps the cost of turning down the stereo or carrying the garbage to a trash can would be almost trivial, so doing it to accommodate others' preferences results in a net improvement in welfare. Alternatively, we could try to exclude people whose normative beliefs differ from our own from the spaces that most affect us, but it should be obvious that this also often causes harms far out of proportion to the good that results, such as ethnic cleansing.
All the other approaches to resolving the conflict that I can think of—bargaining, mediation, arbitration, collective deliberation, etc.—begin unavoidably with stating the unfulfilled desire. Or, as you put it, hectoring someone to 'stop doing this'.
There's no analogy or wall of text that makes that comment unshitty and inviting of conversation. It's not a thing one should do on HN because it trashes the place. We resolve this by striving to control our own reflexive dickishness and downvoting/flagging the egregiously dickish comments, which is exactly what happened here.
I agree that it's a dickish, shitty comment, and uninviting of conversation. I don't agree that the reason it's dickish is that comments of the form "Please don't do such and such" are inherently dickish. I think that such comments are uncomfortable but necessary, and tabooing discussion of such conflicts does more harm than good—in addition to the reasons above, it would ensure that only the most disagreeable commenters dare to make them.
Undoubtedly, if you devote the minute and a half required to read my "wall of text" comment above, you will be persuaded by its reasoning.
I fixed the confusing bit, thanks. I'm not persuaded by the reasoning because I don't see how the reasoning is relevant - we're talking about a specific dickish comment in a specific social place with its specific norms. These are so well understood and established that the comment got flagblasted by users and scolded by a moderator on top of that - effectively the maximum penalty/public shaming an HN comment can get. It's not a hypothetical different context in which some kind of hypothetical value eventually comes from such comments - the bad comment and the bad subthread are concretely in front of us.
I think the non-hypothetical value that came from this comment in this case is that it surfaced good reasons for writers to use generative AI and showed that many people support doing so. I would have liked to see that happen in a much more civil fashion, but I don't think it could happen at all without some openly stated form of the initial objection to writers using generative AI.
My concern is that the flagblasting and moderator-scolding, while certainly justified by the comment in question, will cause the collateral damage of discouraging politer versions of such comments in the future. So I think it's worthwhile to affirm that criticizing people's behavior to their face is not in fact inherently dickish, but rather a much better alternative to doing it behind their back, or to finding ways to silently exclude them, or people you suspect of being like them.