It may be the reality in your mind, but it is not the reality in the real world. Don't expect to be taken seriously by anyone outside your political bubble until you learn to communicate your concerns using reasonable language.
I think you're absolutely right that you shouldn't be listening to me, a potential internet nobody. There's no way you could tell me or anyone here apart from an AI at this point.
You should be listening to actual reporters, such as Jonathan Rauch from The Atlantic: https://archive.is/Maalh "Yes, it's Fascism"
In its current state, it's basically just self-verification. When you use a new device, it shows a series of emoji on each device and asks you whether they're the same; if they are, the device is verified.
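For anyone curious how that emoji comparison can work under the hood, here's a rough Python sketch of the general short-authentication-string idea, assuming both devices already share a secret from the key exchange. The emoji table, function name, and transcript string are made up for illustration; they are not the actual protocol's values.

  import hashlib, hmac

  # Illustrative 16-entry table; real protocols define a fixed, longer list.
  EMOJI = ["🐶", "🐱", "🦁", "🐴", "🦄", "🐷", "🐘", "🐰",
           "🐼", "🐢", "🐧", "🐟", "🐙", "🦋", "🌷", "🌳"]

  def emoji_fingerprint(shared_secret: bytes, transcript: bytes, count: int = 7):
      # Both devices hash the same shared secret and session transcript,
      # so they display identical emoji unless the exchange was tampered with.
      digest = hmac.new(shared_secret, transcript, hashlib.sha256).digest()
      return [EMOJI[b % len(EMOJI)] for b in digest[:count]]

  # Each device computes this independently and shows the result to the user.
  print(emoji_fingerprint(b"example-shared-secret", b"deviceA|deviceB|session-1"))

The point is that the user only has to compare a short, human-friendly string instead of a full key fingerprint.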
Just to piggyback off this, I had a similar thought. I read your post about the tool rental, got to the page, and immediately saw random articles. Unfortunately, this AI age has made me distrust that things are human-written or human-curated when they're just presented with no context.
Leading with the rental and more of the community features would be best; then, once you like the concept and the community, it makes sense to get invested in the posted articles, because you've seen the site is active with real people.
We’re working on bringing the rental and community features to the forefront so it’s clear from the start what Patio is about. In a world flooded with AI content, we get that leading with articles can feel impersonal without context.
The goal is to build trust through people and tools first — then let the content support that experience. Thanks for the kind words and thoughtful feedback!
Don't blame them; they just asked their agentic AI to make a successful site for renting tools. A Show HN post and engagement in the comment section are required steps.
Can you elaborate further on this robots.txt approach? I was under the impression that most AI crawlers completely ignore robots.txt, so wouldn't you just be hitting the ones that are at least attempting to obey it?
I'm not against the idea like others here seem to be; I'm more curious about how to implement it without harming good actors.
If your robots.txt has a line specifying, for example
Disallow: /private/beware.zip
and you have no links to that file anywhere else on the site, then any request for that URL means someone or something read the robots.txt and explicitly violated it. At that point you can send it a zipbomb, ban the source IP, or whatever.
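For the mechanics, here's a minimal sketch of that trap, assuming a Flask app and an in-memory ban list; the paths are just the ones from the example above, and a real setup would persist the bans or hand them to a firewall:

  from flask import Flask, abort, request

  app = Flask(__name__)
  BANNED_IPS = set()

  @app.before_request
  def block_banned():
      # Drop anything from an IP that already tripped the trap.
      if request.remote_addr in BANNED_IPS:
          abort(403)

  @app.route("/robots.txt")
  def robots():
      # Advertise the trap path; nothing on the site links to it.
      body = "User-agent: *\nDisallow: /private/beware.zip\n"
      return body, 200, {"Content-Type": "text/plain"}

  @app.route("/private/beware.zip")
  def trap():
      # Only a client that read robots.txt and ignored it lands here.
      BANNED_IPS.add(request.remote_addr)
      abort(403)

Behind a reverse proxy you'd key on the forwarded client address rather than request.remote_addr.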
But in my experience, flagrant robots.txt violations aren't the real problem (half of those requests are probably humans curious about what you're hiding, and most bots written specifically for LLMs don't even check robots.txt).
The real abuse is the crawler that hits an expensive, frequently-changing URL far more often than is reasonable, and the card-testers hitting payment endpoints, sometimes with excessive chargebacks. Then there are port-scanners, though those are a minor annoyance if your network setup is decent, and email spoofers who drag your server's reputation down if you don't set things up correctly early on and whenever you change hosts.
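For the crawler-hammering-an-expensive-URL case, the usual fix is per-client rate limiting. Here's a rough in-memory sliding-window sketch in Python; the thresholds are made up, and in production you'd normally do this at the proxy or in something like Redis so it survives restarts and multiple workers:

  import time
  from collections import defaultdict, deque

  WINDOW_SECONDS = 60   # look at the last minute of traffic
  MAX_REQUESTS = 30     # arbitrary per-IP budget for the expensive endpoint

  _hits = defaultdict(deque)

  def allow(ip: str) -> bool:
      # Drop timestamps that fell out of the window, then check the budget.
      now = time.monotonic()
      window = _hits[ip]
      while window and now - window[0] > WINDOW_SECONDS:
          window.popleft()
      if len(window) >= MAX_REQUESTS:
          return False
      window.append(now)
      return True

  # In the handler for the expensive endpoint:
  #   if not allow(client_ip):
  #       return "Too Many Requests", 429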
The fact that every single person who brings this up is met with some form of "we don't see a problem, you're just reading it wrong" is a bad sign for the future of this project.