It's also slow for bulk operations like "mv somefolder/ ..." - it processes each file one at a time rather than as a single batch operation (I tried it out recently and this was one thing that stuck out).
I was using Python's Pelican static site generator for some time until I wanted to further customize a theme's template fragments. I started running into issues and even helped fix a bug in the build command. Eventually I couldn't be bothered and wrote my own static site, except with Next.js instead of plain HTML. Didn't take long, and I don't have to mess around with awkward Jinja templates anymore.
Similarly, I started out with Pelican but eventually needed finer control over the site using MDX etc., so I migrated my site over to Astro and have been pretty happy with it.
I wanted to apply OCR to my company's invoicing, since they basically did purchasing for a bunch of other large companies, but the variability in the conversion was not tolerable. Even rounding something differently could catch an accountant's eye, let alone detecting an "8" as a "0" or worse.
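One cheap mitigation (not something the parent comment describes, just a sketch of the kind of cross-check that catches digit errors) is to verify that each parsed line item reconciles arithmetically; the function name, inputs, and tolerance here are all illustrative:

```python
from decimal import Decimal

def check_line_item(quantity, unit_price, line_total, tol=Decimal("0.01")):
    """Flag an OCR'd invoice line whose numbers don't reconcile.

    A single misread digit (e.g. an "8" read as a "0") almost always
    breaks the quantity * unit_price == line_total identity, so this
    check catches most digit errors before an accountant does.
    """
    expected = Decimal(quantity) * Decimal(unit_price)
    return abs(expected - Decimal(line_total)) <= tol

# A correctly parsed line reconciles:
check_line_item("3", "12.50", "37.50")  # True
# A misread total (37.50 parsed as 30.50) is flagged:
check_line_item("3", "12.50", "30.50")  # False
```

This only catches errors that break the arithmetic; a misread quantity paired with a consistently misread total would still slip through, which is part of why fully automating this kind of pipeline is hard.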
When I first started using Cursor, the default behavior was for Claude to make a suggestion in the chat, and if the user agreed with it, they could click apply or cut and paste the part they wanted into their larger project. Now the default seems to be for Claude to start writing files to the current working directory without regard for app structure or context (e.g., Claude likes to create duplicate copies of config files that are already defined elsewhere). Why change the default to this? I could be wrong, but I would guess most devs want to review changes to their repo first.
Cursor has two LLM interaction modes, chat and composer. The chat does what you described first and composer can create/edit/delete files directly.
Have you checked which mode you're on? It should be a tab above your chat window.
One of my best contributions was a 1 line fix in one of Microsoft's repos. It took me a day or two to understand the code and what it was doing but it was well worth it.
I think Claude trades quality for speed. From what I've seen it starts generating almost immediately, even with a large token window. For smaller changes it's usually good enough, but larger changes are where I bump into issues as well. I'll stick to asking it to change [somefunction] rather than change the entire file.
I tend to iterate and limit the output by saying, e.g., "make the smallest change possible, don't rewrite the file, tell me what you want to do first." It seems to respond well to change requests, with much apology. ChatGPT berates me about its own code and keeps saving stuff I don't want in memory, so I have to go back and clean up entries like "Is testing Lambda functions directly in the AWS web console" and "Is working with testing and integration test fixtures in their software projects" when those are 2 of 100 things I'm doing. I'm using SAM for Lambda; I might have run one function in the console once to bypass the API, and now it's saved that as gospel. Half the benefit of LLMs is that they forget context when you start a new chat, so you can control what they focus on.
I started writing my resume in LaTeX years ago. There are a couple of very similar templates on Overleaf; there seems to be a convergence on the standard "engineering resume".
I made https://resumai.co/convert to allow others to use the template. It works fairly well with most inputs. It's a free tool!
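For anyone curious what that converged style tends to look like, here's a minimal sketch (the layout and names are illustrative, not taken from any specific Overleaf template or from the tool above):

```latex
\documentclass[10pt]{article}
\usepackage[margin=0.75in]{geometry} % tight margins, the usual one-page constraint
\usepackage{enumitem}
\pagestyle{empty}

\begin{document}
\begin{center}
    {\LARGE Jane Doe}\\
    jane@example.com \quad (555) 555-0100
\end{center}

\section*{Experience}
\textbf{Software Engineer}, ExampleCorp \hfill 2020--Present
\begin{itemize}[leftmargin=*, nosep]
    \item One bullet per accomplishment, ideally with a quantified impact.
\end{itemize}
\end{document}
```

The common traits are a single column, narrow margins, unnumbered sections, and right-aligned dates via `\hfill`.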