Some users, like journalists and the visually impaired, disable JavaScript. If you can make a product that works for them, why not do it?
Some web clients, like TTY-based clients, ignore both JavaScript AND CSS. People using these clients rightly expect to get a worse web experience overall. But if you can make a functional web page for them, why wouldn’t you?
These both sound like relatively niche edge cases. But nobody knows how things will change in the future.
Thanks for touching on accessibility. I believe that good accessibility leads to a better design. The nice thing about starting with HTML is that there are good defaults.
I wrote another post about progressively enhancing a search form (https://jch.github.io/posts/2025-01-30-building-modern-searc...). Starting with semantic <search> and <input> elements gives sane browser and screenreader behavior.
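A minimal sketch of what that starting point might look like (the field name, label text, and `/search` endpoint are placeholders, not from the linked post):

```html
<!-- Works with JS disabled: submitting does a plain GET to /search.
     The <search> element marks this region as a search landmark
     for screen readers; type="search" gets browser defaults like
     a clear button and search-specific mobile keyboards. -->
<search>
  <form action="/search" method="get">
    <label for="q">Search posts</label>
    <input type="search" id="q" name="q" required>
    <button type="submit">Search</button>
  </form>
</search>
```

JavaScript can then layer on top of this (live results, keyboard shortcuts) without breaking the no-JS fallback.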
Perhaps my title came on too strong, but I'm not advocating against JavaScript. It's more about understanding which capabilities HTML and CSS can handle, and what is better suited for JS.
If I were to drop acid and hallucinate an alien invasion, and then suddenly a xenomorph runs loose around the city while I'm tripping balls, does being right in that one instance mean the rest of my reality is also a hallucination?
Because the point being made, multiple times, seems to be that a perceptual error isn't a key component of hallucinating; the whole thing is instead just a convincing illusion that could theoretically apply to all perception, not just the psychoactively augmented kind.
And then write a program which detects a randomly opened bookmark and closes it, because I'm in the middle of something important and can't be bothered right now.
Not really the same, but I find bookmark folders to be pretty useful. I have a "daily", "weekly", "taxes", etc., and it's just one click to open them all up in one go. They could easily be converted into a cron job with a plaintext URL list like TFA.
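A minimal sketch of that cron-job idea, assuming a hypothetical `~/bookmarks/daily.txt` with one URL per line (the file path and `OPENER` override are my own conventions, not from TFA):

```shell
#!/bin/sh
# open_bookmarks: open every URL listed in a plaintext file, one per line.
# Blank lines and '#' comments are skipped. Uses xdg-open by default;
# set OPENER=open on macOS, or OPENER=echo for a dry run.
open_bookmarks() {
  url_file="$1"
  opener="${OPENER:-xdg-open}"
  grep -v '^#' "$url_file" | while IFS= read -r url; do
    [ -n "$url" ] && "$opener" "$url"
  done
}
```

Wired into cron, something like `0 9 * * 1-5 /home/me/bin/open_bookmarks.sh ~/bookmarks/daily.txt` would pop the "daily" folder open every weekday morning.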
I believe we can still write them for fun or for practice. But having them readily available (and tested) when you are trying to build something is much better.
Both views have valid points. I guess it depends on how quickly you need to get something up and running. Battle-tested libraries definitely have their pros, but then again the surface area for possible bugs is also greater. Most of my Rust code, for instance, is quite complex, and I wouldn't be able to write it without external crates.

That said, in most projects I end up with 300+ indirect dependencies, most of which I know nothing about. They add to compile time and final binary size, and most importantly, I'm building my software on top of a huge stack of stuff I don't really understand. I guess the higher level we go, the less we can avoid this anyway. Some of these points don't necessarily apply to interpreted languages, of course.
In academia, I see academics using ChatGPT to write papers. They get help with definitions; they even feed it the related-work PDFs and have it write that section. No one fact-checks. Students use it to write reports, homework, and code.
GPT may be good for learning, but not for total beginners; that is key. As many people here have stated, it can be good for those with experience. Those without should seek out experienced people. Then, once they have the basics, they can use GPT to go further.