
It's Cloudflare and parasites like them that will make the internet un-free. It's already happening: I'm either blocked or back to 1998 load times because of "checking your browser". They are destroying the internet and will make it so only people who do approved things on approved browsers (meaning letting advertising companies monetize their online activity) get real access.

Cloudflare isn't solving a problem, they are just inserting themselves as an intermediary to extract a profit, and making everything worse.






How is Cloudflare a parasite? I can use Cloudflare, and get their AI protection, for free. I have dozens of domains I have used with Cloudflare at one point and I haven't paid them a dime.

A parasite leeches off its host to the host's harm. Maybe it's not a good analogy, but I'm in China, and it's painful, after paying money for a VPN to bypass censorship, to find myself routinely blocked by CDNs because they decided I'm not human. I honestly feel more oppressed by these middlemen than by the government sometimes. For example, maybe I can't log in to a game because the login API blocks me, and the game company just tells me to run an antivirus scan and try again, since they didn't build that blocking system themselves and have no insight into it. People with a genuine need for VPNs and privacy tools are the sacrifice this system demands.

They put themselves as a middleman for almost the whole Internet, collect huge amounts of usage data about everyone, and block anybody who doesn't use mainstream tools:

https://news.ycombinator.com/item?id=42953508

https://news.ycombinator.com/item?id=13718752

https://news.ycombinator.com/item?id=23897705

https://news.ycombinator.com/item?id=41864632

https://news.ycombinator.com/item?id=42577076


You can add another one as a result of this article: The data you need to train AI and the data you need to build a search engine are the same data. So now they're inhibiting every new search engine that wants to compete with Google.

They always have. This post is about turning the false positives "up to 11" with impunity.



valid.

...but OTOH it's their customers who want all of that and pay to get that, because the alternative is worse.

rock and a hard place.


I want to know if there is a way to design an alternative that isn't controlled by a single entity which allows gatekeeping.

Right - do I want them getting some info from me, or do I want my IP address exposed?

Besides CloudFront, which still costs money, what other option is there for semi-privacy and caching for free?


As the old adage goes: if you're not paying for it, you're the product.

Lots of nuance, but generally: pay for things you use. Servers, engineers, and research and development are not free, so someone has to pay.


Lots of services don't even let me pay if I wanted to, so I am forced to be the product. (Donating typically does not un-productify myself).

Or I pay and am still the product. Just with less in-my-face ads.


> Or I pay and am still the product. Just with less in-my-face ads.

Yes, this is enshittification. You pay for Amazon something or other, and they STILL show you ads. Horrible.


fwiw, I have been convinced to look at other options

CloudFront is pretty much free for your first TB. Fastly has a free plan.

Though why should it be for free?


Multiple people have brought that up. I pay for everything else, why not one more.

Although bunny.net won't take ANY of my credit or debit cards


bunny.net has some options

I will have to check them out I guess

Serious question: You put Cloudflare between all your domains and all your visitors without looking into how this would affect your site's reachability? If so, that's interesting, considering that many people in this community are negatively affected by Cloudflare because they're using Linux and/or some less-than-mainstream browser.

You might want to read some threads on here about Cloudflare.


Where did I say "all"?

Most of the time I don't use them for their network, usually just DNS records for mail because their interface is nicer than namecheap and gives me basic stats.

To my understanding, they aren't blocking MX records behind captchas


You're right that you didn't say all. But what you wrote implied you use them for "AI protection", even though you didn't explicitly say so.

So if I wrote, "You would put" instead of "You put", then what? Would you be comfortable using their "AI protection" simply because it's free?


AI protection isn't a selling point for me. What I have said is that I use them for DNS records, primarily MX and TXT records.

So you're not using the parasite and that's your claim why it's not a parasite?

Dude, stop putting words in my mouth. I never said they weren't bad.

Some nicer people here tried the educative approach and it worked much better. I learned about Bunny. And I keep forgetting I have a few in deSec but that has a limit.

I do not understand the hostility


> I do not understand the hostility

Unfortunately I don’t think they were participating in the conversation in good faith. People can have an extreme view on _anything_…even internet / tech. They buy into a dream of 100% open source, or “open internet”, or 100% decentralized, whatever.

When this happens they may be convinced that “others” are crazy for not sharing their utopian vision. And once this point is reached, they struggle to communicate with their peers or normal people effectively. They share their strong opinions without sharing important context (how they reached those opinions), they think the topic is black and white (because they feel so strongly about the topic), or they become hostile to others that are not sharing that vision.

You are their latest victim lol. Ignore them, and carry on.


One of my favorite quotes: "As a rule, strong feelings about issues do not emerge from deep understanding." -Sloman and Fernbach

Learning how to spot this, and ignore such-minded people who argue in bad faith, has made me a lot happier and more chill in general.


I never said you did?

One response up, you implied they weren't parasites by asking how they were parasites, and then proceeded to claim you have no experience with their parasitic services.

I'm just pointing out your anecdote wasn't valid.


>How is Cloudflare a parasite?

>I never said they weren't bad.

>I don't understand the hostility.

It's well known that the community here doesn't like Cloudflare, and anyone who's been on the customer end of Cloudflare would tend to agree. In that context, if you truly are blind to this, saying "how is Cloudflare a parasite" to a group that dislikes Cloudflare may land like saying "How is Hitler a bad guy?", which I hope is self-evidently a way of saying he's a good guy in context. Of course you could play devil's advocate and insist you were merely asking an innocent question.


I thought Cloudflare overall was neutral - meaning as many haters as lovers. I know the CEO frequents here as well.

When I asked how Cloudflare is a "parasite", I was being genuine. I knew it was a problem for some users, but I don't think I realized how prevalent it was.


> I have dozens of domains I have used with Cloudflare at one point and I haven't paid them a dime.

Maybe you haven't, but your users (primarily those using "suspicious" operating systems and browsers) certainly have – with their time spent solving captchas.


But Cloudflare have removed CAPTCHAs

Not sure if you're joking, but if you're not: Congratulations on using a very "normal/safe" OS/browser/IP.

I get captchas daily, without using any VPN and on several different IPs (work, home, mobile). The only crime I can think of is that I'm using Firefox instead of Chrome.


Since a few days ago, I've been getting Captchas hourly or more.

It's probably because I use Firefox on Linux with an ad blocker.

For my part, I've ensured we don't use Cloudflare at work.


I use Firefox on Linux with an ad blocker and Cloudflare works fine.

Using Linux is rare among the general public, but very normal among the kind of person who may find themselves working at Cloudflare or at a potential cloudflare partner/customer.

I don't really buy the argument that they're pushing more captchas to you just because of using Firefox on Linux with an ad blocker.



It must depend on something else. Firefox & Linux have always worked fine for me, I cannot remember when I last got restricted by a Cloudflare captcha.

My residential IP of years (which is not shared or CGNAT) was recently flagged by Cloudflare for who knows what reason. If you're asking, you haven't seen what happens when Cloudflare thinks you are something else.

Cloudflare are not the good guys just because they give people free CDN and DDoS protection, lol.



It's not much consolation to me if I'm one of the 25% still being challenged.

The world really has more than enough heuristic fraud detection systems that most people aren't even aware exist, but make life miserable for those that somehow don't fit the typical user profile. In fact, the lower the false positive rate gets, the more painful these systems usually become for each false positive.

I'm so tired of it. Sure, use ML (or "AI") to classify good and evil users for initial triaging all day long, but make sure you have a non-painful fallback.


Managed challenges are just CAPTCHA by another name.

I use a VPN and firefox and I get some extra captchas but not enough to be annoying. And you don't have to do anything more than tap the checkbox.

Meanwhile a bunch of "security" products other websites use just flat out block you if you're on a VPN. Other sites like youtube or reddit are in between where they block you unless you are logged in.

Cloudflare is the least obtrusive of the options.


No, the least obtrusive option is the one you don't even notice because it actually works (or offers a non-painful secondary flow when it doesn't).

Really? Because I'm on Debian, with Firefox, with a VPN active 24/7 and I almost never get Captchas. I do get those "checking your browser" pages often but they just stick around for maybe half a second then redirect.

you forgot /s

(For the people not getting the joke: yes, the new system doesn't make you train an image recognition dataset, but they profile the hell out of anything they can get their hands on, just like Google's captcha, and then if you smell like a bot you're denied access. Goodbye.)


Download Brave.

Turn on Tor and browse for a week.

Now you know what “undesirables” feel like, where “undesirables” can be from a poor country, a bad IP block, outdated browsers, etc.

It sucks.


It's kind of an impossible problem though. They either save some tracking cookie to link your sessions between websites, or they have to re-run the captcha check on every website.

Why download Brave and then use Tor?

Just use the Tor browser


Some large percentage of people fail when directed to the Tor browser; I don't know why.

This is not a good reason to suggest Brave instead of Tor Browser.

I already said in another post I am looking at Bunny, but they also don't seem to want to take my money. I've tried 3 cards. I am willing to pay for a good service, but I will be honest, I don't know many of cloudflare's competitors

Did you read his comment? He explained the issue he has with Cloudflare...

Yeah but they are a dictator, OpenAI et al are the parasites.

I use Firefox with ad blocking and some anti-fingerprinting measures and I rarely hit their challenges. Your IP reputation must be bad.

They have an addon [1] that helps you bypass Cloudflare challenges anonymously somehow, but it feels wrong to install a plugin to your browser from the ones who make your web experience worse

1: https://developers.cloudflare.com/waf/tools/privacy-pass/


> Your IP reputation must be bad.

And for an extremely large number of honest users, they cannot realistically avoid this.

I live in India. Mobile data and fibre are all through tainted CGNAT, and I encounter Cloudflare challenges all the time. The two fibre providers I know about use CGNAT, and I expect others do too. I did (with difficulty!) ask my ISP about getting a static IP address (having in mind maybe ditching my small VPS in favour of hosting from home), but they said ₹500/month, which is way above market rate for leasing IPv4 addresses, more than I pay for my entire VPS in fact, so it definitely doesn’t make things cheaper. And I’m sceptical that it’d have good reputation with Cloudflare even then. It’ll probably still be in a blacklisted range.


Why don't your ISPs just use IPv6?

I'm in a pretty similar boat except I frequently hit challenges. Especially if I use a VPN (which is more trustworthy than my ISP). Ironically, I'm using Cloudflare for DoH

I'd be surprised if Cloudflare were actually correlating DoH requests to HTTP requests following them, so I don't think that's a signal they are likely to use.

Probably not. In fact, it's probably a good sign that they are being accurate about that traffic being encrypted.

But I did find it ironic


I'm having lots of problems with fingerprinting protection on Librewolf and ungoogled-chromium. I use the uBlock Origin and JShelter extensions on both. I'm always getting "your browser is out of date" despite always running the newest versions.

Some sites like Stack Exchange will work after just reloading the page. Most of the rest work when I remove the JavaScript protection and fingerprint detection from JShelter. Still not all of them. So they maybe/probably want to reliably fingerprint my browser before letting me continue.

If I use crappy fingerprint protection I don't have problems, but if I actually randomize some values then sites won't work. JShelter deterministically randomizes some values, using a session identifier and the eTLD+1 domain as a key, to avoid breaking site functionality, but apparently Cloudflare is being really picky. The Tor Browser doesn't have these problems, but it uses a different anti-fingerprinting strategy: instead of randomizing values, it tries to present unified values across different users, making identification impossible.


LLM scrapers have dramatically been increasing the cost of hosting various small websites.

Without something being done, the data that these scrapers rely on would eventually no longer exist.


I think the correct framing is that unrestricted LLM scrapers have dramatically been increasing the cost of hosting various small websites.

It's not an issue when somebody does "ethical" scraping: for instance, a 250ms delay between requests, and an active cache that checks specific pages (like news article links) and re-scrapes them at 12 or 24h intervals. This type of scraping puts almost no pressure on the websites.

The issue I have seen is that the more unscrupulous parties just let their scrapers go wild, constantly re-scraping again and again because the cost of scraping is extremely low. A small VM can easily push thousands of scrapes per second, let alone somebody with more dedicated resources.

Actually building an "ethical" scraper takes more time, as you need to fine-tune it per website. Unfortunately, this bad behavior is going to cost the more ethical scrapers a ton, as anti-scraping efforts increase the cost on our side.
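
That polite pattern (a fixed delay between requests plus a 12-24h recheck window) can be sketched roughly like this; the function names and thresholds here are illustrative, not any real crawler's API:

```python
import time

FETCH_DELAY = 0.25            # 250 ms between requests
RECHECK_INTERVAL = 12 * 3600  # re-scrape a cached page after 12 hours

def should_refetch(last_fetched, now, interval=RECHECK_INTERVAL):
    """Only hit the origin again once the cached copy has gone stale."""
    return last_fetched is None or now - last_fetched >= interval

def polite_crawl(urls, fetch, cache, now=time.time, delay=FETCH_DELAY):
    """Fetch each URL at most once per interval, pausing between requests."""
    for url in urls:
        if not should_refetch(cache.get(url), now()):
            continue              # still fresh in cache: no origin traffic
        fetch(url)
        cache[url] = now()
        time.sleep(delay)         # spread the load on the origin server
```

The recheck interval is what does most of the work: pages that haven't expired generate no origin traffic at all.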


The biggest issue for me is crawlers masquerading their User-Agent strings. Even if they are slow and respectful, crawlers should clearly identify themselves, provide a documentation URL, and obey robots.txt. Without that, I have to play a frankly tiring game of cat and mouse, wasting my time and the time of my users (who have to put up with some form of captcha or PoW thing).

I've been an active lurker in the self-hosting community and I'm definitely not alone. Nearly everyone hosting public-facing websites, particularly sites whose content is rather juicy for LLMs, has been facing these issues. It costs more time and money to deal with this, when a simple User-Agent block would be much cheaper and trivial to apply and maintain.

sigh


I use Cloudflare and edge caching, so it doesn’t really affect me, but the amount of LLM scraping of various static assets for apps I host is ridiculous.

We're talking about a JavaScript file of response strings like "login failed" and "reset your password", fetched over and over again. Hundreds of fetches a day, often from what appears to be the same system.


Turn on the Cloudflare tarpit. When it detects LLM scrapers, it starts generating infinite AI slop pages to feed them, ruining their dataset and keeping them off your actual site.

From the server perspective Cloudflare is solving problems and not causing problems to other servers.

Analogy: locks for high-value items in grocery stores are annoying to customers, but other stores aren't being coerced by the locksmith to use them.


Yep this terrifies me, 100%. We’re slowly losing the open internet and the frog is being boiled slowly enough that people are very happy to defend the rising temperature.

If DDoS wasn’t a scary enough boogeyman to get people to install Cloudflare as a man-in-the-middle on all their website traffic, maybe the threat of AI scrapers will do the trick?

The thing about this slow slide is it’s always defensible. Someone can always say “but I don’t want my site to be scraped, and this service is free, or even better yet, I can set up my own toll booth and collect money! They’re wonderful!”

Trouble is, one day, at this rate, almost all internet traffic will be going through that same gate. And once they have literally everyone (and all their traffic)… well, internet access is an immense amount of power to wield and I can’t see a world in which it remains untainted by commercial and government interests forever.

And “forever” is what’s at stake, because it’ll be near impossible to recover from once 99% of the population is happy to use one of the 3 approved browsers on the 2 approved devices (latest version only). Feels like we’re already accepting that future at an increasing rate.


The Internet is not the first global network. Before the Internet, you had the global telephone network. It, too, strangled end users, but eventually became stagnant, overpriced, and irrelevant. Super long-term, the current Internet is not immune from this. Internet standards are getting about as complicated and quirky as the old Bell stuff that was trying to make miles of buried copper the future, and if regulatory/commercial forces freeze this stuff in place, it's going to lead to stagnation eventually.

Something coming down the pike I think, for example, is that IPv4 addresses are going to get realllly expensive soon. That's going to lead to all sorts of interesting things in the Internet landscape and their applications.

I'm sure we'll probably have to spend some decades in the "approved devices and browers only" world before a next wave comes.


We need a reasonable alternative to some of what Cloudflare does that can be easily installed as a package on Linux distributions without any of the following to install it.

* curl | bash

* Docker

* Anything that smacks of cryptocurrency or other scams

Just a standard repo for Debian and RHEL derived distros. Fully open source so everyone can use it. (apt/dnf install no-bad-actors)

Until that exists, using Cloudflare is inevitable.

It needs to be able to at least:

* provide some basic security (something to check for sql injection, etc)

* rate limiting

* User agent blocking

* IP address and ASN blocking

Make it easy to set up with sensible defaults and a way to subscribe to blocklists.
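
As a rough illustration of the rate-limiting and User-Agent-blocking items on that wishlist, here is a minimal sliding-window sketch; the bot names and thresholds are made-up placeholders, and a real deployment would pull them from a subscribed blocklist:

```python
import time
from collections import defaultdict, deque

# Illustrative defaults; a real setup would load these from config/blocklists.
BLOCKED_AGENT_SUBSTRINGS = ("GPTBot", "CCBot", "Bytespider")
RATE_LIMIT = 10     # max requests ...
RATE_WINDOW = 1.0   # ... per second, per client IP

_hits = defaultdict(deque)  # per-IP timestamps of recent requests

def allow_request(ip, user_agent, now=None):
    """Return False when a request should get a 403 (blocked UA) or 429 (rate)."""
    if any(bot in user_agent for bot in BLOCKED_AGENT_SUBSTRINGS):
        return False
    now = time.monotonic() if now is None else now
    window = _hits[ip]
    while window and now - window[0] > RATE_WINDOW:
        window.popleft()            # expire hits that left the sliding window
    if len(window) >= RATE_LIMIT:
        return False                # over the per-IP budget
    window.append(now)
    return True
```

SQL-injection screening and ASN blocking would layer on top of this same per-request hook.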


I make this: https://anubis.techaro.lol. I have yet to add the SQL injection or IP list layers, but I can add that to the roadmap.

The primary reason people use Cloudflare is to hide the IP address of their own server, so they are less likely to be hacked.

Most people are not worried about DDoS, as there is no reason for anyone to DDoS them.

Until other services start offering the same, Cloudflare remains default.


The proof of work stuff feels so cryptocurrency adjacent that I've been looking at other tools for my own thing, but I've seen Anubis on other websites and it seems to do a good job.

There's a non proof of work challenge: https://anubis.techaro.lol/docs/admin/configuration/challeng...

Also: Anubis does not mine cryptocurrency. Proof of work is easy to validate on the server and economically scales poorly in the wild for abusive scrapers.


Thanks for the link. I’ll have a look.

I’m glad there’s no cryptocurrency involved (was never a concern) but I worry about the optics of something so closely associated.

(I appreciate your commenting on this. I know the project recently blew up in popularity. Keep up the great work)


If you have suggestions for JS based challenges that don't become a case of "read the source code to figure out how to make playwright lie", I'm all ears for the ideas :)

This unsubstantiated anti-cryptocurrency bias on HN is quite disappointing. Did you hear about Filecoin, which lets you buy and sell disk space independently of large companies? Why wouldn't an anonymous cryptocurrency like Monero help with this real problem? What would the downsides be?

I remember using mod_security with Apache long ago for some of this, looks like it's still around and now also supports Nginx and IIS: https://modsecurity.org/

Thank you. This doesn't have everything I'm looking for, but apparently it has been packaged in Debian at least. I don't know why the website doesn't mention this.

It's called not having a vibe-coded app that falls to pieces on public endpoints even before the nginx rate limit can kick in.

Nobody is talking about a vibe coded app. I want to block AI scrapers entirely.

point is, why do you care if your site can handle the traffic?

there's no (malicious) bot detection that won't impact a portion of real users. accept that fact and just let it be.

poisoning data in ways that's obvious to the false positive user is a much better option.


I really doubt any legit user is using a weird user agent and an IP address in the same AS as an AI slop crawler

You'd be surprised. Your users too, but you wouldn't know because they will not be able to tell you.

Correction: extract monstrous profits. When I read about the revenues associated with the Reddit AI deals, I can't even imagine what deals covering half of the internet could be worth. Cynically speaking, it's a genius-level move.

Yep, it's really annoying.

I'm using Firefox with a normal adblocker (uBlock Origin).

I get hit with a Cloudflare captcha often and that page itself takes a few seconds before I can even click the checkbox. It's probably an extra 6-7 seconds and it happens quite a few times a day.

It's like calling into a billion dollar company and it taking 4 minutes to reach a human because you're forced through an automated system where you need to choose 9 things before you even have a chance to reach a human. Of course it rattles through a bunch of non-skippable stuff that isn't related to your issue for the first minute, like how much the company is there to offer excellent customer support and how much they value you.

It's not about the 8 seconds or 4 minutes. It's the feeling that you're getting put into really poor experiences from companies with near-unlimited resources with no control over the situation while you slowly watch everything get worse over time.

The Cloudflare situation is worse because you have no options as an end user. If a site uses it, your only option is to stop using the site and that might not be an option if they are providing you an important service you depend on.

Secondly, they now have a complete profile of your browsing history for any site that has CF enabled, and there's not much you can do here except stop using the 20% (or whatever market share) of the internet they have, and also do a DNS lookup for every domain you visit from an anonymous machine to see if it resolves to a Cloudflare IP range.

In case you didn't know, CF offers a partial CNAME / DNS feature where your primary DNS can be hosted anywhere and then you can proxy traffic from CF to your back-end on a per-domain / per-subdomain level. Basically, you can't just check a site's DNS provider to see if they are on CF. You would have to check each domain and subdomain to see if it resolves to a CF IP range, which is documented here: https://www.cloudflare.com/ips-v4/ and https://www.cloudflare.com/ips-v6/
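
Concretely, that check means resolving each domain and subdomain and comparing the addresses against those published ranges. A minimal sketch; the ranges below are a small hard-coded subset for illustration, and the authoritative lists are the URLs above:

```python
import ipaddress

# A few of Cloudflare's published IPv4 ranges, hard-coded for illustration.
CF_RANGES = [ipaddress.ip_network(n) for n in (
    "104.16.0.0/13",
    "172.64.0.0/13",
    "131.0.72.0/22",
)]

def ip_in_cf_range(ip):
    """True if the address falls inside one of the listed Cloudflare ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CF_RANGES
               if addr.version == net.version)
```

You would feed this the addresses from a resolver (e.g. `socket.getaddrinfo`) for every hostname you care about, and refresh CF_RANGES from the published lists periodically.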


If you're on IPv6, I think they have to for ipv6 addresses… there’s just way too many bots and way too many addresses to feasibly do anything more precise.

If you're on IPv4, you should check whether you're behind a NAT; otherwise you may have gotten an address that was previously used by a bot network.


> I think they have to for ipv6 addresses… there’s just way too many bots and way too many addresses

Are you really arguing that it's legitimate to consider all IPv6 browsing traffic "suspicious"?

If anything, I'd say that IPv4 is probably harder, given that NATs can hide hundreds or thousands of users behind a single IPv4 address, some of which might be malicious.

> you may have gotten an address that was previously used by a bot network.

Great, another "credit score" to worry about...


For a whitelist system, then by definition yes?

If it’s a blacklist system, like I said I’ve not heard of any feasible solution more precise than banning huge ranges of ipv6 addresses.


> For a whitelist system, then by definition yes?

A whitelist system would consider all IPv4 traffic suspicious by default too. This is not an answer to why you'd be suspicious of IPv6 in particular.

> I’ve not heard of any feasible solution more precise than banning huge ranges of ipv6 addresses.

Handling /56s or something like that is about the same as handling individual IPv4 addresses.


I try to build things to be INET6-ready, and just treat /64s like a single host. Eventually this will probably have to be broadened to /56s or /48s.
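
Treating a whole /64 (or /56, /48) as one client can be done by collapsing each IPv6 address to its prefix before using it as a rate-limit or ban key. A small sketch of that idea, using Python's stdlib ipaddress module:

```python
import ipaddress

def client_key(ip, v6_prefix=64):
    """Collapse an IPv6 address to its /64 (or /56, /48) prefix so one
    subscriber counts as one client; IPv4 addresses stay as-is."""
    addr = ipaddress.ip_address(ip)
    if addr.version == 6:
        net = ipaddress.ip_network(f"{ip}/{v6_prefix}", strict=False)
        return str(net)
    return str(addr)
```

Every address in the same /64 then maps to the same key, which is roughly equivalent to keying on individual IPv4 addresses.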

> A whitelist system would consider all IPv4 traffic suspicious by default too.

Based on what argument…?


The definition of whitelisting. The argument you brought up.

No…? Someone can clearly implement a whitelist system that applies only to ipv6… but that makes no judgement on ipv4.

Let's back up a step. You said by definition a whitelist system would consider every IPv6 suspicious (until it's put on the list, presumably). What is that definition?

If "applies only to IPv6" is an optional decision someone could make, then it's not part of the definition of a whitelist system for IPs, right?


What are you talking about?

The prior comment was responding directly to your comment, not any comment preceding that.

Of course it’s no longer by definition if you expand the scope beyond an ipv6 whitelist as there are an infinite number of possible whitelists.


> What are you talking about?

The first comment with the word "whitelist". Before I entered the conversation. This comment: https://news.ycombinator.com/item?id=44449821

lxgr was challenging the idea that you would treat all IPv6 traffic as suspicious.

You justified it by saying that "by definition" "a whitelist system" would do that.

I want your definition of "a whitelist system". Not one of the infinite possible definitions, the one you were using right then while you wrote that comment.

> if you expand the scope beyond an ipv6 whitelist

Your comment before that was talking about IP filtering in general, both v4 and v6!

And then lxgr's comment was about both v4 and v6.

So when you said "a whitelist system" I assumed you were talking about IP whitelists in general.

If you weren't, if you jumped specifically to "IPv6 whitelist", you didn't answer the question they were asking. What is the justification to treat all IPv6 as suspicious? Why are we using the definition of 'IPv6 whitelist' in the first place?


None of this even makes sense.

Why does your opinion on how a comment should be interpreted, matter more than anyone else’s opinion in the first place?


I didn't say that. Huh?

I'm inviting you to tell me how to interpret it. In fact I'm nearly begging you to explain your original comment more. I'm not telling anyone how to interpret it.

I have criticisms for what was said, but that comes after (attempted) interpretation and builds on top of it. I'm not telling anyone how to interpret any post I didn't make.

Edit: In particular, my previous comment has "I assumed" to explain my previous posts, and it has an "If" about what you meant. Neither of those is telling anyone how to interpret you.


This is even closer to gibberish… what are you even trying to say?

You don't understand a word I'm saying, and you have missed/declined every single time I asked you to explain the first comment I responded to.

Let's just mutually give up on this conversation.


Okay then.


