Sustaining is used in Engineering to mean that it's now post-GA and there is no further development. The platform is not End of Life but there are no more features planned.
They meant what they wrote. Merriam-Webster's definition: "to support the weight of."
It means they're transitioning to the absolute minimum to keep it alive and nothing more. That could, in worst cases, mean firing everyone except one guy, or using AI to keep it alive.
I recall comments about this last week on the BBC website where people made the points that:
1. Surely the long-term plan is to not keep these relics in a gargantuan warehouse but instead to put them in a museum (or museums) — with free entry no less — so that the tax-paying public can enjoy them.
2. Further, collections of relics that relate to the site of each station on the line could be displayed in each.
The tax-paying public aren't going to pay for that.
The existing collections can only just justify free entry. Most museums already have a vast secondary collection that's not on display. These items are going into a warehouse because there isn't enough money to do archaeology on them any time soon, let alone prep them for display.
Seconding that, it's really good. You can still only see a small fraction of it, just because of the nature of the place (it's essentially a central viewing space completely surrounded by warehouse shelving), but it's really interesting, not least for the meta perspective of seeing how they store and tend to the pieces.
For example, there's a bunch of swords 'on display' (such as it is), and then off to one side you can just about see into an entire sword storage/curation room, holding many times more than are actually on show.
Science Museum opens its warehouse in Swindon to the public too
Highly recommended for people with an interest in vehicles, but there's a lot of other stuff there too, from twentieth-century consumer goods to the contents of Stephen Hawking's office on shelves, plus document archives.
Out of 450,000 pieces, I bet 440,000 are just pottery shards and other "boring" things. Important for history etc., but no one wants to go to a museum with 400,000 almost identical pottery shards and the like. Only a tiny fraction will be things the public actually wants to see in a museum.
So true. Folks used pots for tens of thousands of years, and used them mostly like disposable dinnerware. They broke, daily, and got tossed out the window. A settlement of a dozen roundhouses might have a million sherds, depending on how long it persisted.
1. The permanent collections of just about all museums in the UK are free, so if these items go to a museum they will be free to see (after an initial special exhibition, if they were to host one).
2. This is not uncommon for things like Roman ruins in the UK. For example, near the Tower of London there is a glass window in a random pedestrian underpass where you can see part of the original Roman wall around London, and in Cirencester and St Albans there are big parks where you can see the Roman ruins. Where relics are smaller or more valuable, something like a railway station isn't really set up to keep them secure and on display, so they would sometimes show casts or photographs of items and keep the original in a dedicated exhibition in a museum. For example, if you go to Orkney you can see some Viking relics in situ (e.g. the "Viking graffiti" runes on the stones in Maes Howe), while others (like the Scar boat burial) you need to go to an actual museum to see.
"Do not rely too much on your own judgment. [...] if you are an expert user of LLMs and you tag 10 pages as being AI-generated, you've probably falsely accused one editor."
Never accuse people of LLM writing based on short comments, your false positive rate is invariably going to be way too high to be acceptable given the very limited material.
It's just not worth it: even if you correctly accuse nine times out of ten, you are being toxic to the one false positive for basically no gain.
I really don't think this should be a registry-level issue. As in, the friction shouldn't be introduced into _publishing_ workflows, it should be introduced into _subscription_ workflows, where there is an easy fix. Just stop supporting auto-update (through wildcard patch or minor version ranges) by default... Make the default behaviour to install exactly the versions resolved at install time (like `npm ci` does from the lock file).
No, it does need help on the publishing side. Most places I know are not auto-updating by default. Everything has a lock file. But the nx attack a few months ago happened because the VS Code extension for nx always ran the @latest version to check for updates, or something like that.
So yeah… people will always have these workflows which are either stupid or don’t have an easy way to use a lock file. So I’d sure as hell like npm to also take some steps to secure things better.
As far as I know, using a lock file with npm install is both the default behavior and also doesn't randomly update things unless you ask it to… though it's definitely best practice to pin dependencies too.
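To illustrate the pinning point in this sub-thread (a sketch only; the package names and version numbers are made up): a caret range in `package.json` lets a fresh `npm install` with no lock file float to newer minor/patch releases, while an exact version always resolves to that one release:

```json
{
  "dependencies": {
    "left-pad": "^1.3.0",
    "some-lib": "1.3.0"
  }
}
```

Here `^1.3.0` can match any 1.x.y release at or above 1.3.0, whereas the bare `1.3.0` is pinned. With a committed `package-lock.json`, `npm ci` installs exactly what the lock file records and fails outright if it disagrees with `package.json`.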
We’re heading to sleep as we’re on London time! Thanks to everyone who commented & for providing feedback. It’s been immensely useful to hear everyone’s perspectives.
Please reach out to us at product@kenobi.ai if you would like to.
You shouldn't be(!) We set a very high bar for what gets counted as bot traffic. Sometimes we're at the mercy of the model providers, though, and there's way more latency, and even timeouts, under higher load.
We get asked this a fair amount, and the way we're approaching it is to build more opportunities for site owners to define context as part of the broad site research that goes into creating the interpolations.
If I were to do this, I'd decide which audiences my site was targeting and ensure the landing page had pre-approved content for each of them. Then I'd only use the LLM to rearrange the pre-approved marketing content, so that it puts the content it thinks best targets the visitor above the fold. This way, the worst the LLM can do is order the content incorrectly, and the visitor would need to scroll to see the content that targets them.
Even better, the LLM can draft rules ahead of time for matching traffic to targeted profiles (corp IPs show enterprise content, gov IPs show the gov offering, EU IPs show European hosting options, etc.). This way you don't use an LLM while rendering the page, reducing cost and speeding up page load times.
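The rule-based routing described above can be sketched as a pure function applied at request time, with no model call in the rendering path. Everything here is hypothetical: the `Visitor` fields stand in for whatever an IP-intelligence lookup returns, and in practice the rule table itself is what the LLM would generate offline.

```typescript
// Hypothetical sketch: map a classified visitor to a pre-approved
// content profile using fixed rules, so page rendering never calls an LLM.

type Profile = "enterprise" | "government" | "eu-hosting" | "default";

interface Visitor {
  orgType?: "corp" | "gov"; // e.g. from an IP-intelligence lookup
  region?: string;          // ISO country code, e.g. "DE"
}

// Rule data the LLM could produce offline and a human could review.
const EU_REGIONS = new Set(["DE", "FR", "NL", "IE", "ES", "IT"]);

function matchProfile(v: Visitor): Profile {
  // More specific rules first: org type beats region.
  if (v.orgType === "gov") return "government";
  if (v.orgType === "corp") return "enterprise";
  if (v.region !== undefined && EU_REGIONS.has(v.region)) return "eu-hosting";
  return "default";
}
```

Because the function is deterministic, the worst failure mode is showing the default profile, and the rules can be unit-tested like any other code.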
If you would be willing to give us another chance please email product@kenobi.ai with the site you used and we can get the research context that it generated fixed! (This has been a bit of an issue for some users, when the research gathering agent goes down the wrong track)
We think that in commercial buying there will still be a place for "discovery", where B2B visitors gain from being able to independently digest public-facing materials themselves, and that this will mean adoption of agentic browsing is slower than people think.
However, we did already start experimenting with the agentic browsers like Atlas and Strawberry — I built a PoC for the former. But this is still very much experimental!
Edit:
Just wanted to add that your question is a prescient one, and it is something we get asked a lot by investors, VCs, etc., but hardly ever by people who run businesses with websites, or by the people who visit them / do commercial buying.