I’ve had a lot of success rendering SVG charts via Airbnb’s visx on top of React Server Components, then sprinkling in interactivity with client components. Worth looking into if you want that balance.
It’s more low level than a full charting library, but most of it can run natively on the server with zero config.
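To give a concrete sense of it, here's a minimal sketch (the data, sizes, and file layout are made up; it assumes @visx/scale, @visx/shape, and @visx/group). Because the component uses no hooks or event handlers, it can render entirely on the server:

  // Chart.tsx -- renders fine as a server component: no hooks, no handlers, just SVG output
  import { scaleBand, scaleLinear } from "@visx/scale";
  import { Bar } from "@visx/shape";
  import { Group } from "@visx/group";

  type Point = { label: string; value: number };

  export default function Chart({ data }: { data: Point[] }) {
    const width = 600;
    const height = 300;

    // band scale for the x axis (one band per label), linear scale for the y axis
    const xScale = scaleBand<string>({
      range: [0, width],
      domain: data.map((d) => d.label),
      padding: 0.2,
    });
    const yScale = scaleLinear<number>({
      range: [height, 0],
      domain: [0, Math.max(...data.map((d) => d.value))],
    });

    return (
      <svg width={width} height={height}>
        <Group>
          {data.map((d) => (
            <Bar
              key={d.label}
              x={xScale(d.label)}
              y={yScale(d.value)}
              width={xScale.bandwidth()}
              height={height - yScale(d.value)}
              fill="steelblue"
            />
          ))}
        </Group>
      </svg>
    );
  }

Tooltips, hover states, zooming, etc. then live in a small 'use client' wrapper layered on top, so only that piece ships JavaScript.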
I’ve always found performance to be kind of a drag with server-side DOM implementations.
If you have a lot of pages, AI bots will scrape every single one on a loop. Wikis generally don't have anywhere near as many pages as a site whose page count tracks an auto-incrementing primary key. I have a few million pages on a tiny website and it gets hammered by AI bots all day long. I can handle it, but it's a nuisance, and they're basically just scraping garbage (statistics pages for historical matches, or user pages with essentially no content).
Many of them don't even self-identify and end up scraping with shrouded user-agents or via bot-farms. I've had to block entire ASNs just to tone it down. It also hurts good-faith actors who genuinely want to build on top of our APIs because I have to block some cloud providers.
I would guess that I'm getting anywhere from 10-25 AI bot requests (maybe more) per real user request - and at scale that ends up being quite a lot. I route bot traffic to separate pods just so it doesn't hinder my real users' experience[0]. Keep in mind that they're hitting deeply cold links so caching doesn't do a whole lot here.
[0] this was more of a fun experiment than anything explicitly necessary, but it's proven useful in ways I didn't anticipate
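On the routing itself, nothing fancy is needed. Here's a hypothetical sketch of splitting traffic by user agent (the upstream hostnames and patterns are made up, and since many bots don't self-identify, UA matching only complements the ASN blocks rather than replacing them):

  // proxy.ts -- naive reverse proxy that sends suspected bots to their own upstream
  import http from "node:http";

  // illustrative patterns only; real lists are longer and still miss shrouded UAs
  const BOT_PATTERNS = [/bot/i, /crawler/i, /spider/i, /GPTBot/i, /ClaudeBot/i];

  function isLikelyBot(userAgent: string | undefined): boolean {
    if (!userAgent) return true; // a missing UA is treated as suspicious
    return BOT_PATTERNS.some((re) => re.test(userAgent));
  }

  const server = http.createServer((req, res) => {
    const upstreamHost = isLikelyBot(req.headers["user-agent"])
      ? "bot-pods.internal" // hypothetical service name
      : "user-pods.internal"; // hypothetical service name

    // forward the request as-is and stream the upstream response back
    const proxied = http.request(
      { host: upstreamHost, path: req.url, method: req.method, headers: req.headers },
      (upstreamRes) => {
        res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
        upstreamRes.pipe(res);
      }
    );
    req.pipe(proxied);
  });

  server.listen(8080);

In practice you'd usually do this split at the ingress or load-balancer layer rather than in app code, but the effect is the same: bot load never competes with real users for the same pods.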
How many requests per second do you get? I also see a lot of bot traffic, but nowhere near enough to hit the servers significantly, and I render most stuff on the server directly.
Around a hundred per second at peak. Even though my server can handle it just fine, it muddies up the logs and observability for something I genuinely do not care about at all. I only care about seeing real users' experience. It's just noise.
I don’t understand this conclusion. Why shouldn’t it be a business? Doesn’t it create value? Hasn’t the nature of being a business led to far more maturity and growth in a FOSS offering than if it had been a side project? Just because it can’t afford 8 full-time salaries now doesn’t make it a failure. Your conclusion is that value should be created without any capture.
It wasn’t venture scale and never intended to be venture scale. By any metric you have, it’s a very successful business and has made its creators independent and wealthy as you pointed out.
I agree this is your worldview warping your perception. But I’d argue we need far more tailwinds and far less whatever else is going on. It captured millions in value - but it generated tens, or hundreds of millions, or more. And essentially gave it away for free.
I think a better conclusion is that it’s a flawed business model. In which case, I’d agree - this didn’t come out of nowhere. The product created (TailwindUI) was divorced from the value created (tailwindcss). Perhaps there was a better way to align the two. But they should be celebrated for not squeezing the ecosystem, not vilified. Our society has somewhat perverse incentives.
Fellow bootstrapper checking in. Made an ardent but delicate decision in 2015 not to raise money and ten years later I'm chugging along full-time on my business. Infinitely glad to have chosen this path. It was the right one for me.
Same, I learned to code to build software for my business. I turned it into a SaaS and turned down investment because my original business was already profitable and making the SaaS better was making the business better.
This is why I trust software made by people who come from that industry or have some background in it. I’ve seen too many startups where the founders are fresh out of college, have never worked in that niche, and have never been in the shoes of the people they are trying to sell to. To me, that just means they are trying to get big and get acquired; I’m just a means to that end.
Yup, I echo this sentiment. We're about to flourish.
It's never been cheaper and easier to build real value. It's also never been cheaper and easier to build real crap. But the indie devs who care will build more value, with higher velocity and independence. And good indie development will carry with it an air of quality that the larger-scale crap will struggle to compete with (at the edges). Not that the big players will care, because they'll be making more money off the entrenched behemoths.
But as an indie dev, your incentive structures are far different and far more manageable.
Betteridge's law applies here – if the author truly believed the thesis, they would have declared it as a statement rather than a question.
It doesn’t really matter if they’re using commercial VPNs or the same upstream providers as commercial VPNs. Blocking an ASN is a million times more effective than blocking single IPs (at the risk of blocking genuine customers). I’ve had customers reach out to me asking to be unbanned after I blocked a few ASNs that had hostile scrapers coming out of them. It’s a tough balance.
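For anyone wondering what "blocking an ASN" means in practice: you resolve the ASN to the prefixes it announces (via an IP-to-ASN database or routing data) and drop everything in those ranges, usually at the edge. A minimal sketch of just the prefix-matching part (the prefixes below are documentation/TEST-NET ranges, not a real provider's):

  // asn-block.ts -- match client IPv4 addresses against blocked prefixes
  const BLOCKED_PREFIXES = ["203.0.113.0/24", "198.51.100.0/24"]; // placeholders

  // convert dotted-quad IPv4 to an unsigned 32-bit integer
  function ipToInt(ip: string): number {
    return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
  }

  // true if the address falls inside the CIDR block
  function inPrefix(ip: string, cidr: string): boolean {
    const [base, bitsStr] = cidr.split("/");
    const bits = Number(bitsStr);
    const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
    return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
  }

  export function isBlocked(ip: string): boolean {
    return BLOCKED_PREFIXES.some((cidr) => inPrefix(ip, cidr));
  }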
VPNs often use providers with excellent peering and networking - the same providers that scrapers would want to use.
Hilariously, React Server Components largely solve all three of these problems, but developers don't seem to want to understand how or why, or they suggest that RSCs don't solve any real problems.
I agree though worth noting that data loader patterns in most pre-RSC react meta frameworks + other frameworks also solve for most of these problems without the complexity of RSC. But RSC has many benefits beyond simplifying and optimizing data fetching that it’s too bad HN commenters hate it (and anything frontend related whatsoever) so much.
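For reference, the loader pattern in a Remix-style framework looks roughly like this (the route, data, and fetchStats helper are made up):

  // app/routes/dashboard.tsx -- data is fetched on the server in the loader,
  // and the component just renders it; no client-side fetch waterfall
  import { json } from "@remix-run/node";
  import { useLoaderData } from "@remix-run/react";

  // hypothetical helper standing in for your real data layer
  async function fetchStats() {
    return [{ id: 1, label: "users", value: 42 }];
  }

  export async function loader() {
    const stats = await fetchStats(); // runs only on the server
    return json({ stats });
  }

  export default function Dashboard() {
    const { stats } = useLoaderData<typeof loader>();
    return (
      <ul>
        {stats.map((s) => (
          <li key={s.id}>
            {s.label}: {s.value}
          </li>
        ))}
      </ul>
    );
  }

Same end result for data fetching; one difference is that loaders work per route, while RSCs let you push the server/client split down to individual components.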
I have tailscale running on my robot vacuum. It's my own little autonomous mesh vpn node that lets me connect back to my home network when I'm on the go.
You can root certain models of robot vacuums and then ssh into them. Most run some variant of linux. Then just install tailscale. There are a few blogs out there of people who have done it[0][1].
It's taking a cloud-based product, de-clouding it, and then connecting it to your own private 'cloud'. Pretty cool all things told.
They give you optionality over when and where you want your code to run. Plus they let you define the server/client network boundary where you see fit and cross that boundary seamlessly.
It's totally fine to say you don't understand why they have benefits, but it really irks me when people exclaim they have no value or exist just for complexity's sake. There's no system for web development that provides the developer with more grounded flexibility than RSCs. I wrote a blog post about this[0].
To answer your question: htmx solves this by leaning heavily on the server. It doesn't provide a complete client-side framework for when you need one. RSCs allow both the server and the client to co-exist, simply composing between the two while maintaining the full power of each.
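Concretely, the composition looks something like this (two files shown in one snippet; names and data are placeholders, and the idea is framework-agnostic even if the layout is Next.js-flavored):

  // PostPage.tsx -- a server component: runs only on the server, can await data directly
  import LikeButton from "./LikeButton";

  // stand-in for a real DB query or internal API call
  async function getPost(id: string) {
    return { id, title: "Hello", body: "Post body goes here." };
  }

  export default async function PostPage({ id }: { id: string }) {
    const post = await getPost(id);
    return (
      <article>
        <h1>{post.title}</h1>
        <p>{post.body}</p>
        {/* the boundary: everything below ships JS, everything above is just the RSC payload */}
        <LikeButton postId={post.id} />
      </article>
    );
  }

  // LikeButton.tsx -- a client component: state and event handlers live here
  "use client";
  import { useState } from "react";

  export default function LikeButton({ postId }: { postId: string }) {
    const [likes, setLikes] = useState(0);
    return <button onClick={() => setLikes(likes + 1)}>Like post {postId} ({likes})</button>;
  }

The server part never ships to the browser; the client part hydrates like a normal React component. Props crossing the boundary just need to be serializable.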
But is it a good idea to make it seamless when every crossing of the boundary has significant implications for security and performance? Maybe the seam should be made as simple and clear as possible instead.
Just because something is made possible and you can do it doesn't mean you should!
The criticism is that letting you do something you shouldn't isn't a benefit, even if the system lets you do something you couldn't before.
No, he clearly points out that anyone else would have to be taken off their existing work and context-switch into the context he already has. That's not trashing his engineering team.