Please don't use AI to rewrite your writing. I don't want to read generated text. At this rate, I'm just going to refuse to read anything written after 2022. You should be required to disclose the tools you use to write.
Yes, I do see this as significantly different from human editing and from traditional spell check. Please stop doing this.
Plenty of people have English as a second language. Having an LLM help them rewrite their writing to make it better conform to a language they are not fluent in feels entirely appropriate to me.
I don't care if they used an LLM provided they put their best effort in to confirm that it's clearly communicating the message they are intending to communicate.
On the contrary, I've found Simon's opinions informative and valuable for many years, since I first saw the lightning talk at PyCon about what became Django, which IIRC was significantly Simon's work. I see nothing in his recent writing to suggest that this has changed. Rather, I have found his writing to be the most reliable and high-information-density information about the rapid evolution of AI.
Language only works as a form of communication when knowledge of vocabulary, grammar, etc., is shared between interlocutors, even though indeed there is no objectively correct truth there, only social convention. Foreign language learners have to acquire that knowledge, which is difficult and slow. For every "turn of phrase" you "enjoy" there are a hundred frustrating failures to communicate, which can sometimes be serious; I can think of one occasion when I told someone I was delighted when she told me her boyfriend had dumped her, and another occasion when I thought someone was accusing me of lying, both because of my limited fluency in the languages we were using, French and Spanish respectively.
There is a big difference between the above 'request' and, say, me politely asking the time of a complete stranger I walk by on the street.
Requests containing elements of hostility, shame, or injury frequently serve dual purposes: (1) the ostensible aim of eliciting an action and (2) the underlying objective of inflicting some form of harm (here shame) as a means of compelling compliance through emotional leverage. Even if the respondent doesn't honor the request, the secondary purpose is still achieved.
These are good points, but I think they represent a somewhat narrow view of the issue. What's happening here is that we're discussing among ourselves what kinds of actions would be good or bad with respect to AI, just as we would with any other social issue, such as urban development, immigration, or marital infidelity. You could certainly argue that saying "please don't replace wetlands with shopping malls" or "please don't immigrate to the United States" has "the underlying objective of inflicting some form of harm (here shame) as a means of compelling compliance through emotional leverage."
But it isn't a given that this will be successful; the outcome of the resulting conversation may well be that shopping malls are, or a particular shopping mall is, more desirable than wetlands, in which case the ostensible respondent will be less likely to comply than they would have been without the conversation. And, in this case, it seems that the conversation is strongly tending toward favoring the use of things like Grammarly rather than opposing it.
So I don't oppose starting such conversations. I think it's better to discuss ethical questions like this openly, even though sometimes people suffer shame as a result.
Does this extend to the heuristic TFA refers to? Where they end up (voluntarily or not) referring to what LLMs hallucinate as a kind of “normative expectation,” then use that to guide their own original work and to minimize the degree to which they’re unintentionally surprising their audience? In this case it feels a little icky and demanding because the ASCII tablature feature feels itself like an artifact of ChatGPT’s limitations. But like some of the commenters upthread, I like the idea of using it for “if you came into my project cold, how would you expect it to work?”
Having wrangled some open-source work that’s the kind of genius that only its mother could love… there’s a place for idiosyncratic interface design (UI-wise and API-wise), but there’s also a whole group of people who are great at that design sensibility. That category of people doesn’t always overlap with people who are great at the underlying engineering. Similarly, as academic writing tends to demonstrate, people with interesting and important ideas aren’t always people with a tremendous facility for writing to be read.
(And then there are people like me who have neither—I agree that you should roll your eyes at anything I ask an LLM to squirt out! :)
But GP’s technique, like TFA’s, sounds to me like something closer to that of a person with something meaningful to say, who now has a patient close-reader alongside them while they hone drafts. It’s not like you’d take half of your test reader’s suggestions, but some of them might be good in a way that didn’t occur to you in the moment, right?