I turned this on and it adjusts the robots.txt automatically; I'm not sure what else it is doing.
# NOTICE: The collection of content and other data on this
# site through automated means, including any device, tool,
# or process designed to data mine or scrape content, is
# prohibited except (1) for the purpose of search engine indexing or
# artificial intelligence retrieval augmented generation or (2) with express
# written permission from this site’s operator.
# To request permission to license our intellectual
# property and/or other materials, please contact this
# site’s operator directly.
CCBot was already in so many robots.txt files prior to this.
How is CC supposed to know or control how people use the archive's contents?
What if CC is relying on fair use?
# To request permission to license our intellectual
# property and/or other materials, please contact this
# site's operator directly.
If the operator has no intellectual property rights in the material, do they need permission from the rights holders to license such materials for use in creating LLMs and to collect licensing fees?
Is it common for website terms and conditions to permit site operators to sublicense other people's ("users'") work for use in creating LLMs, for a fee?
Read a ToS and you'll notice that you give the site operator an unlimited license to reproduce or distribute your works on almost any site; it's essentially required just to host and display the content.
This is interesting. The reasoning and response don't line up.
> Cloudflare is making the change to protect original content on the internet, Mr. Prince said. If A.I. companies freely use data from various websites without permission or payment, people will be discouraged from creating new digital content, he said
> prohibited except for the purpose of [..] artificial intelligence retrieval augmented generation
This seems to be targeted at taxing training of language models, but why an exclusion for the RAG stuff? That seems like it has a much greater immediate impact for online content creators, for whom the bots are obviating a click.
With that opinion, are you also suggesting that we ban ad blockers? Because it's better that I not click and consume resources at all than click and not be served ads, which basically just costs the host money.
It makes sense to allow RAG in the same way that search engines provide a snippet of an important chunk of the page.
A blog author could hardly complain that their blog is getting RAG'd when they're very likely Google/whatever searching all day themselves, consuming others' content in exactly the same way they're trying to disparage.
What I want to know is whether the flood of scraping everyone has been complaining about is coming from people trying to scrape for training or from bots doing RAG search.
I get that everyone wants data, but presumably the big players already scraped the web. Do they really need to do it again? Or is it bit players reproducing data that's likely already in the training set? Or is it really that valuable to have your own scraped copy of internet scale data?
I feel like I'm missing something here. My expectation is that RAG traffic is going to be orders of magnitude higher than scraping for training. Not that it would be easy to measure from the outside.
I believe it's both. We're at a place where legislation hasn't really declared what is and isn't allowed. These scrapers are acting like Googlebot or any other search engine crawler, and trying to find any kind of new content that might be of value to their users.
New data is still being added online daily (probably hourly, if not more often) by humans, and the first ones to gain access could be the "winners," particularly if their users happen to need up-to-date data (and the service happens to have scraped it). Just like with search engines/crawlers, there are big players that may respect your website, but there are also those that don't use rate-limiting or respect robots.txt.
You should ask Zuck, since, from what we've seen and what we were asked to act against, Meta is the main culprit in scraping every single page of websites, multiple times a day.
And I'm talking about e-commerce websites, with their bot scraping every variation of each product, multiple times a day.
I don't think we should ban ad blockers, but I also think it's fair to suggest that the loss of organic traffic could be affecting the incentive to create new digital content, at least as much as the fear of having your content absorbed into an LLM's training data.
IMO the backlash against LLMs is more philosophical; a lot of people don't like them, or the idea of one learning from their content. Unless your website has some unique niche information unavailable anywhere else, there's no direct personal risk. RAG would be a more direct threat, if anything.
It's really about who gets the value from the work behind the content. If content creators of all sorts have their work consumed by LLMs, and LLM orgs charge for it and capture all the value, why should people create only to have their work vacuumed up for the robots' benefit? For exposure? You can't eat or pay rent with exposure. Humans must get paid, and LLMs (foundation models and output using RAG) cannot improve without a stream of works and data that humans create.
Whether you call it training or something else is irrelevant; it's really exploitation of human work and effort for AI shareholder returns and tech worker comp (if those who create aren't compensated). And the technocracy has not, based on the evidence, been a great steward of the power it obtains through this. Pay the humans for their work.
AI scrapers increase traffic by maybe 10x (this varies per site) but provide no real value whatsoever to anyone. If you look at various forms of "value":
* Saying "this uses AI" might make numbers go up on the stock market if you manage to persuade people it will make numbers go up (see also: the market will remain irrational longer than you can remain solvent).
* Saying "this uses AI" might fulfill some corporate mandate.
* Asking AI to solve a problem (for which you would actually use the solution) lets you "launder" the copyright of whatever source it is paraphrasing (it's well established that LLMs fail entirely if a question isn't found within their training set). Pirating it directly provides the same value, with significantly fewer errors and less handholding.
* Asking AI to entertain you ... well, there's the novelty factor I guess, but even if people refuse to train themselves out of that obsession, the world is still far too full of options for any individual to explore them all. Even just the question of "what kind of ways can I throw some kind of ball around" has more answers than probably anyone here knows.
Because it's injected into a previously working product - even if it makes it worse - and automatically injects its ideas. That counts as "somebody using it".
Because it's bundled with other products that do provide value, and that counts as "someone using it".
Because some middle manager has declared that I must add AI to my workflow ... somehow. Whatever, if they want to pay me to accomplish less than usual, that's not my problem.
Because it's a cool new toy to play around with a bit.
Because surely all these people saying "AI is useful now" aren't just lying shills, so we'd better investigate their claims again ... nope, still terminally broken.
I wonder… Google scrapes for indexing and for AI, right? I wonder if they will eventually say: ok, you can have me or not, if you don’t want to help train my AI you won’t get my searches either. That’s a tough deal but it is sort of self-consistent.
Very few people seem to be complaining that Google crashes their sites. Google also publishes its crawlers' IP ranges, but you really don't need to rate-limit Google; they know how to back off and not overload sites.
I'm curious whether the content on those sites might have high value to Google, such as data that is new or unavailable elsewhere, or whether they're just standard sites and you've simply been unlucky.
I have had odd bot behavior from some major crawlers, but never from Google. I wonder if there is a correlation to usefulness of content, or if certain sites get stuck in a software bug (or some other strange behavior).
Google does value the sites; they have data unavailable elsewhere. At some point we had an automated message saying the site had too many pages and would no longer be indexed, then a human message saying that was a mistake and our site was an exception to that rule.
But as with any contact with these large companies, our contact eventually disappeared.
"Embrace, Extend, Extinguish" Google's mantra. And yes, I know about Microsoft's history with that phrase ;) But Google has done this with email, browsers (Google has web apps that run fine on Firefox but request you use Chrome), Linux (Android), and I'm sure there's others I am forgetting about.
For my silly hobby sites I just return status 444 and close the connection for anything with a case-insensitive "bot" in the UA requesting anything other than robots.txt, humans.txt, favicon.ico, etc. This would also drop search engines, but I blackhole-route most of their CIDR blocks anyway. I'm probably the only one here who would do this.
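That pattern is easy to express at the web server level. Here's a minimal sketch, assuming nginx (444 is nginx's non-standard "close the connection without responding" code); the allowlisted file names and the UA pattern are just illustrative, not the commenter's actual config:

# Flag any user agent containing "bot", case-insensitively.
map $http_user_agent $ua_is_bot {
    default   0;
    "~*bot"   1;
}

server {
    listen 80;
    server_name example.com;

    # Always serve the polite files, even to bots.
    location ~* ^/(robots\.txt|humans\.txt|favicon\.ico)$ {
        try_files $uri =404;
    }

    location / {
        # Drop the connection without sending any response.
        if ($ua_is_bot) {
            return 444;
        }
        try_files $uri $uri/ =404;
    }
}

Blackholing whole CIDR blocks would happen below this layer, e.g. via null routes or firewall rules, so those requests never reach nginx at all.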
That's at least a more reasonable default than what I've seen at least one newspaper do, which is to explicitly block both LLM scrapers and things like ChatGPT's search feature.
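For reference, that kind of explicit blocking usually looks something like this in robots.txt (user agent tokens per OpenAI's published crawler documentation; an illustrative sketch, not that newspaper's actual file):

User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

Roughly: GPTBot is the training crawler, OAI-SearchBot powers ChatGPT's search feature, and ChatGPT-User covers user-initiated fetches.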
# NOTICE: The collection of content and other data on this
# site through automated means, including any device, tool,
# or process designed to data mine or scrape content, is
# prohibited except (1) for the purpose of search engine indexing or
# artificial intelligence retrieval augmented generation or (2) with express
# written permission from this site’s operator.
# To request permission to license our intellectual
# property and/or other materials, please contact this
# site’s operator directly.
# BEGIN Cloudflare Managed content
User-agent: Amazonbot
Disallow: /

User-agent: Applebot-Extended
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: meta-externalagent
Disallow: /
# END Cloudflare Managed Content

User-agent: *
Disallow: /*
Allow: /$