Chuckling at the disclaimer 'No AI made by a human.' I doubt many web devs could say that, because so many use AI now. I was speaking with a web dev this summer and he told me AI made him at least twice as productive. It's an arms race to the bottom imo.
Which begs the question, are people consciously measuring their productivity? If so, how? And did they do it the same way before and after using AI tooling?
Anecdotal, but I don't measure my productivity, because it's immeasurable. I don't want to be reduced to lines of code produced or JIRA tickets completed. We don't even measure velocity, for that matter. Plus when I do end up with a task that involves writing something, my productivity depends entirely on focus, energy levels and motivation.
I tried GitHub Copilot for about a year; I may try Claude for a bit sooner or later.
It felt like it got in the way about half the time. The only place I really liked it was for boilerplate SQL code... when I was generating schema migration files, it did pretty good at a few things based on what I was writing. Outside that, I don't feel like it helped me much.
For the Google search results stuff, Gemini, I guess... It's hit or miss... sometimes you'll get a function or few things that look like they should work, but no references to the libraries you need to install/add and even then may contain errors.
I watched a friend who is really good with the vibe coding thing, but it just seemed like a frustrating exercise in feeding it the errors/mistakes and telling it to fix them. It's like having a brilliant 10yo with ADD for a jr developer..
One of the only studies done so far showed lower actual productivity despite higher self-reported productivity. That study was quite limited, but I would take self-reported productivity with a huge grain of salt.
> he told me AI made him at least twice as productive.
He’s not only lying to you, he’s also lying to himself.
Recent 12-month studies show that less than 2% of AI users saw an increase in work velocity, and those were only the very top-skilled workers. Projection also indicated that of the other 98%, over 90% of them will never work faster with AI than without, no matter how long they work with AI.
TL;DR: the vast majority of people will only ever be slower with AI, not faster.
> This is actually a clever way to distinguish if the browser supports XSLT or not. Actual content is XHTML in https://xslt.rip/index.xsl
I agree it is a clever way. But it also shows exactly how hard it is to use XML and XSLT in a "proper way": formally, everything is fine done this way (except that the server is sending 'content-type: application/xml' for /index.xsl; it should be 'application/xslt+xml').
Almost all implementations in XML and XSLT that I have seen in my career showed a nearly complete lack of understanding of how they were intended to be used and how they should work together. Starting with completely pointless key/value XMLs (I'm looking at you, Apple and Nokia), through call-template orgies (IBM), to 'yet-another-element-open/-close' implementations (almost every in-house application development in PHP, Java or .NET).
I started using XSLT before the first specification had been published. Initially, I only used it in the browser. Years later, I was able to use XSLT to create XSDs and modify them at runtime.
To me XSLT came with a flood of web complexity that led to there being effectively only two viable web browsers. It seems a bit funny because the website looks like it's straight out of the '90s, when "everything was better".
It was not rendering that killed other browsers. Rendering isn't the hard part. Getting most of rendering working gets you about 99% of the internet working.
The hard part, the thing that killed alternative browsers, was javascript.
React came out in 2013, and everyone was already knee-deep in earlier-generation JavaScript frameworks by then. Google had released the V8 engine back in 2008, which brought the sluggish web back to some sense of usable. Similarly, Mozilla had to spend that decade engineering its JavaScript engine to claw itself out of the "Firefox is slow" gutter that people insisted it was in.
Which is funny, because if you had adblock, I'm not convinced Firefox was ever slow.
A modern web browser doesn't JUST need to deal with rendering complexity, which is manageable and doable.
A modern web browser has to do that AND spin up a JIT compiler engineering team to rival Google or Java's best. There's also no partial credit, as javascript is used for everything.
You can radically screw up rendering a page and it will probably still be somewhat usable to a person. If you get something wrong about javascript, the user cannot interact with most of the internet. If you get it 100% right and it's just kind of slow, it is "unusable".
Third party web browsers were still around when HTML5 was just an idea. They died when React was a necessity.
Conveniently, all three of the major JS engines can be extracted from the browsers they are developed for and used in other projects. Node famously uses V8, Bun uses JavaScriptCore (the WebKit one), and Servo I believe embeds SpiderMonkey.
If you want to start a new browser project, and you're not interested in writing a JS engine from scratch, there are three off-the-shelf options there to choose from.
It's far from dead, though. XML is deeply ingrained in many industries and stacks, and will remain so for decades to come, probably until something better than JSON comes along.
There was fresh COBOL code written up until the early 1990s too, long past its heyday.
Thing is, you couldn't swing a dead cat in the '00s without hitting XML. Nearly every job opening had XML listed in the requirements. But since the mid-2010s you can live your entire career without the need to work on anything XML-related.
But it’s still there and needs to be supported by the OS and tooling. Whether you edit it manually isn’t relevant (and as a counterpoint, I do it all the time, for both apps and launchd agents).
There's still EPUB and tons of other standards built on XML and XHTML. Ironically, the last EPUB file I downloaded, a comic book from Humble Bundle, had a 16 MB CSS file composed entirely of duplicate definitions of the same two styles, and none of it was needed at all (set each page and image to the size of the image itself, basically).
On the web. I, among other things, make Android apps, and Android and XML are one and the same. There is no such thing as Android development without touching XML files.
I did Android Developer Challenge back in 2008 and honestly don't remember doing that much of XML. Although it is the technology from peak XML days so perhaps you're right.
It has, I think, one nice feature that few markups I use these days have: every node is strongly-typed, which makes things like XSLT much cleaner to implement (you can tell what the intended semantics of a thing is so you aren't left guessing or hacking it in with __metadata fields).
... but the legibility and hand-maintainability was colossally painful. Having to tag-match the closing tags even though the language semantics required that the next closing tag close the current context was an awful, awful amount of (on the keyboard) typing.
I have the same mixed feelings. Complexity is antidemocratic in a sense. The more complex a spec gets the fewer implementations you get and the more easily it can be controlled by a small number of players.
It’s the extend part of embrace, extend, extinguish. The extinguish part comes when smaller and independent players can’t keep up with the extend part.
A more direct way of saying it is: adopt, add complexity cost overhead, shake out competition.
Ironically, that text is all you get if you load the site from a text browser (Lynx etc.) It doesn't feel too different from <noscript>This website requires JavaScript</noscript>...
I now wonder if XSLT is implemented by any browser that isn't controlled by Google (or derived from one that is).
> now wonder if XSLT is implemented by any browser that isn't controlled by Google (or derived from one that is).
Edge IE 11 mode is still there for you. Which also supports IE 6+ like it always did, presumably. They didn’t reimplement IE in Edge; IE is still there. Microsoft was all in on xml technologies back in the day.
I should've worded it differently. By the narrative of this website, Google is "paying" Mozilla & Apple to remove XSLT, thus they are "controlled" by Google.
I personally don't quite believe it's all that black and white, just wanted to point out that the "open web" argument is questionable even if you accept this premise.
I suspect that it wouldn't actually be that difficult to add XSLT support to a textmode browser, given that XSLT libraries exist and that XSLT in the browser is a straightforward application of it. They just haven't bothered with it.
I'm strongly against the removal of XSLT support from browsers—I use both the JavaScript "XSLTProcessor" functions [0] and "<?xml-stylesheet …?>" [1] on my personal website, I commented on the original GitHub thread [2], and I use XSLT for non-web purposes [3].
But I think that this website is being hyperbolic: I believe that Google's stated security/maintenance justifications are genuine (but wildly misguided), and I certainly don't believe that Google is paying Mozilla/Apple to drop XSLT support. I'm all in favour of trying to preserve XSLT support, but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.
Small, sure, but not elite. xml-stylesheet is by far the easiest way to make a simple templated website full of static pages. You almost could not make it any simpler.
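For anyone who hasn't seen the pattern, here's a minimal sketch (file names and element names are made up for illustration): an XML page carrying an xml-stylesheet processing instruction, plus one shared XSLT template.

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="site.xsl"?>
    <!-- page.xml: just the content; the shared chrome lives in site.xsl -->
    <page>
      <title>Hello</title>
      <body>Some content.</body>
    </page>

    <!-- site.xsl: one template reused by every page on the site -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/page">
        <html>
          <head><title><xsl:value-of select="title"/></title></head>
          <body>
            <h1><xsl:value-of select="title"/></h1>
            <p><xsl:value-of select="body"/></p>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>

Drop both files on any static host and a browser with XSLT support renders the HTML client-side; no build step or server-side templating needed.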
WebExtensions still have them? I thought the move to HTML (for better or worse) would've killed that. Even install.rdf got replaced IIRC, so there shouldn't be many traces of XML in the new extensions system...
Can’t you just do the xslt transformation server-side? Then you can use the newest and best xslt tools, and the output will work in any browser, even browsers that never had any built-in xslt support.
> Cant you just do the xslt transformation server-side?
For my Atom feed, sure. I'm already special-casing browsers for my Atom feed [0], so it wouldn't really be too difficult to modify that to just return HTML instead. And as others mentioned, you can style RSS/Atom directly with CSS [1].
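(The CSS route is just a different xml-stylesheet processing instruction at the top of the feed; a rough sketch, with a made-up stylesheet name:)

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/css" href="feed.css"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
      <title>Example feed</title>
    </feed>

Since XML elements have no default rendering, feed.css has to assign display values itself (e.g. make entry and title block-level), but that's enough for a readable fallback in the browser.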
For my Stardew Valley Item Finder web app, no. I specifically designed that web app to work offline (as an installable PWA), so anything server-side won't work. I'll probably end up adding the JS/wasm polyfill [2] to that when Chrome finally removes support, but the web app previously had zero dependencies, so I'm a little bit annoyed that I'll have to add a 2MB dependency.
That is actually Mozilla's stance in the linked issue, except it's on the client side: they would rather replace it with a non-native implementation (so there are no more surprising security issues) if removing it outright is impractical.
There is actually an example of such a situation. Mozilla removed the Adobe PDF plugin support a long time ago and replaced it with pdf.js. It's still a slight performance regression for very large PDFs, but it is enough for most use cases.
But the bottom line is "it's actually worth doing because people are using it". They won't actively support a feature that few people use, because they don't have the people to support it.
Huh? How would a static site generator serve both RSS and the HTML view of the RSS from the same file?
To be extra clear: I want to have <a href="feed.xml">My RSS Feed</a> link on my blog so everyone can find my feed. I also want users who don't know about RSS to see something other than a wall of plain-text XML.
You don't serve them from the same file. You serve them from separate files.
As I mention in my other comment to you, I don't know why you want an RSS file to be viewable. That's not an expected behavior. RSS is for aggregators to consume, not for viewing.
Technically, the web server can do content negotiation based on Accept headers with static files. But… In theory, you shouldn't need a direct link to the RSS feed on your web page. Most feed readers support a link-alternate in the HTML header:
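Something along these lines, with a made-up feed URL:

    <link rel="alternate" type="application/rss+xml"
          title="My RSS Feed" href="https://example.com/feed.xml">

(or type="application/atom+xml" for an Atom feed).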
Someone who wants to subscribe can just drop example.com/blog in to the feed reader and it will do the right thing. The "RSS Feed" interactive link then could go to a HTML web page with instructions for subscribing and/or a preview.
I think also literally, independent of the cheeky tone.
Where it lost me was:
>RSS is used to syndicate NEWS and by killing it Google can control the media. XSLT is used worldwide by multiple government sites. Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
I mean yes, Google lobbies, and certainly can lobby for bad things. And though I personally didn't know much of anything about XSLT, from reading a bit about it I am certainly ready to accept the premise that we want it. But... is Google lobbying for an XSLT law? Does "control legislation" mean deprecating a tool for publishing info on government sites?
I actually love the cheeky style overall, and would say it's a brilliant signature style to get attention, but I think implying this is tied to a campaign to control laws is rhetorical overreach even by its own intentionally cheeky standards.
I think the reason you're considering it rhetorical overreach is because you're taking it seriously. If the author doesn't actually mind the removal of XSLT support (i.e. possibly rues its removal, but understands and accepts the reasons), then it's really a perfectly fine way to just be funny.
Right, my quote and your clarification are saying the same thing (at least that's what I had in mind when I wrote that).
But that leaves us back where we started, because characterizing that as "control the laws" is an instance of the rhetorical overreach I'm talking about, strongly implying something like literal control over the policy-making process.
Laws that are designed to help you but you can't easily access, or laws that are designed to control/restrict you and that get shoved in your face: once you manage "consumption" of laws, you can push your agenda too.
I agree that you would have to believe something like that to make sense of what it's implying. But by the same token, that very contention is so implausible that that's what makes it rhetorical overreach.
It would be ridiculous to suggest that anyone's access to published legislation would be threatened by its deprecation.
This is probably the part where someone goes "aha, exactly! That's why it's okay to be deprecated!" Okay, but the point was supposed to be what would a proponent of XSLT mean by this that wouldn't count as them engaging in rhetorical overreach. Something that makes the case against themselves ain't it.
It's hard enough telling them to also get off Instagram and Whatsapp and switch to Signal to maintain privacy. I'm going to have a hard time explaining what XSLT is!
> but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.
You cannot “convince decision-makers” with a webpage anyway. The goal of this one is to raise awareness on the topic, which is pretty much the only thing you can do with a mere webpage.
For some reason people seem to think raising awareness is all you need to do. That only works if people already generally agree with you on the issue. Want to save endangered animals? raising awareness is great. However if you're on an issue where people are generally aware but unconvinced, raising more awareness does not help. Having better arguments might.
>For some reason people seem to think raising awareness is all you need to do.
I guess I'm not seeing how that follows. It can still be complementary to the overall goal rather than a failure to understand the necessity of persuasion. I think the needed alchemy is a serving of both, and I think it actually is trying to persuade at least to some degree.
I take your point with endangered animal awareness as a case of a cause where more awareness leads to diminishing returns. But if anything that serves to emphasize how XSLT is, by contrast, not anywhere near "save the animals" level of oversaturation. Because save the animals (in some variation) is on the bumper sticker of at least one car in any grocery store parking lot, and I don't think XSLT is close to that.
I think it's the other way around. Simply raising awareness about endangered animals may be enough to gain traction since many/most people are naturally sympathetic about it. Conversely, XSLT being deprecated has lower awareness initially, but when you raise it many people hearing that aren't necessarily sympathetic - I don't think most engineers think particularly fondly about XSLT, my reaction to it being deprecated is basically "good riddance, I didn't think anyone was really using it in browsers anyway".
As an open source developer, I also have a lot of sympathy for Google in this situation. Having a legacy feature hold the entire project back despite almost nobody using it, because the tiny fraction that do are very vocal and think it's fine to be abusive to developers to get what they want, despite the fact it's free software they didn't pay a dime for, is something I think a lot of open source devs can sympathize with.
I think all that you say applies to a random open source project done by volunteer developers, but really doesn't in case of Google.
Google has used its weight to build a technically better product, won the market, and are now driving the whole web platform forward the way they like it.
This has nothing to do with the cost of maintaining the browser for them.
It seems likely to me that it is about the 'cost' - not literally monetary cost but one or two engineers periodically have to wrangle libxslt for Chrome and they think it's a pain in the ass and not widely used, and are now responding by saying "What if I didn't have to deal with this any more".
I'm not sure what else it would be about - I don't see why they would especially care about removing XSLT support if cost isn't a factor.
Google is still made up of people, who work a finite amount of hours in a day, and maybe have other things they want to spend their time on then maintaining legacy cruft.
There is this weird idea that wealthy people & corporations aren't like the rest of us, and no rules apply to them. And to a certain extent it's true that things are different if you have that type of wealth. But at the end of the day, everyone is still human, and the same restrictions still generally apply. At most they are just pushed a little further out.
My comment is not about that at all: it's a response to the claim that Google's SW engineering team is feeling the heat just like any other free software project, and thus we should be sympathetic to them.
I am sure they've got good reasons they want to do this: them having the same problems as an unstaffed open source project getting vocal user requests is not one of them.
>I think it's the other way around. Simply raising awareness about endangered animals may be enough to gain traction since many/most people are naturally sympathetic about it.
You're completely right in your literal point quoted above, but note what I was emphasizing. In this example, "save the animals" was offered as an example of a problem oversaturated in awareness to a point of diminishing returns. If you don't think animal welfare illustrates that particular idea, insert whatever your preferred example is. Free tibet, stop diamond trade, don't eat too much sodium, Nico Harrison shouldn't be a GM in NBA basketball, etc.
I think everyone on all sides agrees with these messages and agrees that there's value in broadcasting them up to a point, but then it becomes not an issue of awareness but willpower of relevant actors.
You also may well be right that developers would react negatively, honestly I'm not sure. But the point here was supposed to be that this pages author wasn't making the mistake of strategic misunderstanding on the point of oversaturating an audience with a message. Though perhaps they made the mistake in thinking they would reach a sympathetic audience.
> For some reason people seem to think raising awareness is all you need to do.
I don't think many do.
It's just that raising awareness is the first step (and likely the only one you'll ever see anyway, because for most topics you aren't in a position where convincing *you* in particular has any impact).
Sure, but translating that movement to actual policy change usually depends on how much uninvolved people are sympathetic to the protestors, which usually involves how rational the protestors are perceived as. Decision makers are affected by public sentiment, but public sentiment of the uninvolved public generally carries more weight.
That's why the other side usually tries to smear protests as being crazy mobs who would never be happy. The moment you convince uninvolved people of this, the protestors lose most of their power.
> Rational arguments come later, and mostly behind closed doors.
I disagree with this. Rational arguments behind closed doors happen before resorting to protest not after. If you're resorting to protest you are trying to leverage public support into a more powerful position. That's about how much power you have not the soundness of your argument.
> Sure, but translating that movement to actual policy change usually depends on how much uninvolved people are sympathetic to the protestors
No, that's the exception rather than the rule. That's a convenient thing to teach the general public, and that's why people like MLK Jr. and Gandhi are celebrated, but most movements that make actual policy changes do so while disregarding bystanders entirely (or even actively hurting bystanders; that's why terrorism, very unfortunately, is effective in practice).
> which usually involves how rational the protestors are perceived as
I'm afraid most people don't really care about how rational anyone is perceived as. Trump wouldn't have been elected twice if that were the case.
> Decision makers are affected by public sentiment, but public sentiment of the uninvolved public generally carries more weight.
They only care about the sentiment of the people that can cause them nuisance. A big crowd of passively annoyed people will have much less bargaining power than a mob of angry male teenagers doxxing and mailing death threats: see the gaming industry.
> I disagree with this. Rational arguments behind closed doors happen before resorting to protest not after.
Bold claim that contradicts the entire history of social conflicts…
My emotional response to XSLT being removed was: "finally!". You would need some good arguments to convince me that, despite my emotions applauding this decision, it is actually a bad thing.
> Last time this came up the consensus was that libxslt was barely maintained and never intended to be used in a secure context and full of bugs.
Sure, I agree with you there, but removing XSLT support entirely doesn't seem like a very good solution. The Chrome developer who proposed removing XSLT developed a browser extension that embeds libxslt [0], so my preferred solution would be to bundle that by default with the browser. This would:
1. Fix any libxslt security issues immediately, instead of leaving it enabled for 18 months until it's fully deprecated.
2. Solve any backwards compatibility concerns, since it's using the exact same library as before. This would avoid needing to get "consensus" from other browser makers, since they wouldn't be removing any features.
3. Be easy and straightforward to implement and maintain, since the extension is already written and browsers already bundle some extensions by default. Writing a replacement in Rust/another memory-safe language is certainly a good idea, but this solution requires far less effort.
This option was proposed to the Chrome developers, but was rejected for vague and uncompelling reasons [1].
> I think if the XSLT people really wanted to save it the best thing to do would have been to write a replacement in Rust.
That's already been done [2], but maintaining that and integrating it into the browsers is still lots of work, and the browser makers clearly don't have enough time/interest to bother with it.
From your [1] “rejected for vague and uncompelling reasons”:
>>> To see how difficult it would be, I wrote a WASM-based polyfill that attempts to allow existing code to continue functioning, while not using native XSLT features from the browser.
>> Could Chrome ship a package like this instead of using native XSLT code, to address some of the security concerns? (I'm thinking about how Firefox renders PDFs without native code using PDF.js.)
> This is definitely something we have been thinking about. However, our current feeling is that since the web has mostly moved on from XSLT, and there are external libraries that have kept current with XSLT 3.0, it would be better to remove 1.0 from browsers, rather than keep an old version around with even more wrappers around them.
The bit that bothers me is that Google continue to primarily say they’re removing it for security reasons, although they have literally made a browser extension which is a drop-in replacement and removes 100% of the security concerns. The people that are writing about the reasons know this (one of them is the guy that wrote it), which makes the claim a blatant lie.
I want people to call Google specifically out on this (and Apple and Mozilla if they ever express it that way, which they may have done but I don’t know): that their “security” argument is deceit, trickery, dishonest, grossly misleading, a bald-faced lie. If they said they want to remove it because barely anyone uses it and it will shrink their distribution by one megabyte, I would still disagree because I value the ability to apply XSLT on feeds and other XML documents (my Atom and RSS feed stylesheets are the most comprehensive I know of), but I would at least listen to such honest arguments. But falsely hiding behind “security”? I impugn their honour.
(If their extension is not, as their descriptions have implied, a complete, drop-in replacement with no caveats, I invite correction and may amend my expressed opinion.)
You still need to maintain that sandbox. Ultimately no one wants to spend energy maintaining software that isn't used very heavily. That's why feature deprecation happens. If someone cares enough, they should step in and offer to take over long-term maintenance and fix the problems. Ideally a group of people, and perhaps more ideally, a group with some financial backing (e.g. a company), otherwise it may be difficult to actually trust that they will live up to the commitment.
Even projects like Linux deprecate old underused features all the time. At least the Internet has real metrics about API usage which allows for making informed decisions. Folks describing how they are part of that small fraction of users doesn't really change the data. What's also interesting is that a very similar group of people seem to lament about how it's impossible to write a new browser these days because there are too many features to support.
"The sandbox" in this case is their ability to execute WASM securely. It's a necessary part of the "modern" web. If they were planning on also nuking WASM from orbit because it couldn't be made secure, this would be another topic entirely. There's nothing they're maintaining just-for-xslt-1.0-support beyond a simple build of libxslt to WASM, a copy block in their build code, and a line in a JSON list to load WASM provided built-ins (which they would want anyway for other code).
"effecting something else" (i.e. escaping the sandbox) is the core issue. JavaScript (and WASM) engines have to be designed to defend against the user running outright malicious scripts without those scripts being able to gain access to the rest of the browser or the host system. By comparison, potentially exploitable but non-malicious, messy code is basically a non-issue. Any attacker that found a bug in a sandboxed XSLT polyfil that allowed them to escape the sandbox or do anything else malicious would be able to just ship the same code to the browser themselves to achieve the same effect.
I think their logic makes sense. They're removing support because of security concerns, and they're not adding support back using an extension because approximately nobody uses this feature.
Adding the support back via an extension isn't cost free.
I suppose that’s a legitimate framing. But I will still insist that, at the very least, their framing is deliberately misleading, and that saying “you can’t have XSLT because security” is dishonest.
But when it “isn’t cost-free”… they’ve already done 99.9% of the work required (they already have the extension, and I believe they already have infrastructure to ship built-in functionality in the form of Web Extensions—definitely Firefox does that), and I seem to recall hearing of them shifting one or two things from C/C++ to WASM before already, so really it’s only a question of whether it will increase installer/installed size, which I don’t know about.
According to the extension's README there are still issues with it, so they definitely would have to do more work.
And yeah Chrome is really strict about binary size these days. Every kB has to be justified. It doesn't support brotli compression because it would have added like 16kB to the binary size.
The easier thing might have been if Chrome & co opted to include any number of polyfills in JS bundled with the browser instead of making an odd situation where things just break.
I think you can recognize that the burden of maintaining a proven security nightmare is annoying while simultaneously getting annoyed for them over-grabbing on this.
Which would be a totally sensible thing to do. Especially if JPEG were a rarely used image format with few libraries supporting it, the main one being unmaintained.
There is already a replacement in rust but people like you and the Google engineers have ignored that fact. “Good luck” they all say turning their nose away from reality so they can kill it. Thanks for your support.
I'm aware I'm in a minority, but I find it sad that XSLT stalled and is mostly dead in the market. The amount of effort put into replicating most of the XML+XPath+XSLT ecosystem we had as open standards 25 years ago, using ever-changing libraries with their own host of incompatible limitations, rather than improving what we already had, has been a colossal waste of talent.
Was SOAP a bad system that misunderstood HTTP while being vastly overarchitected for most of its use cases? Yes. Could overuse of XML schemas render your documents unreadable and overcomplex to work with? Of course. Were early XML libraries well designed around the reality of existing programming languages? No. But also was JSON's early implementation of 'you can just eval() it into memory' ever good engineering? No, and by the time you've written a JSON parser that beats that you could've equally produced an equally improved XML system while retaining the much greater functionality it already had.
RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.
There are still virtually zero good XML parsers but plenty of good JSON parsers so I do not buy your assertion. Writing a good JSON parser can be done by most good engineers, but I have yet to use a good XML parser.
This is based on my personal experience of having to parse XML in Ruby, Perl, Python, Java and Kotlin. It is a pain every time, and I have run into parser bugs at least twice in my career, while I have never experienced a bug in a JSON parser. Implementing a JSON parser correctly is way simpler. And they are also generally more user friendly.
Take a look at C# / dotnet. The XML parser that's been around since the early 2000s is awesome, but the JSON libraries are just okay. The official JSON library leaves so much to be desired that the older, 3rd party library is often better.
Oooh, then it makes sense why there isn't a good set of layers:
XmlReader -> (XmlDocument or XmlSerializer) generally hits all use cases for serialization well. XmlReader is super-low-level streaming, when you need it. XmlDocument is great when you need to reason about XML as the data structure, and XmlSerializer quickly translates between XML and data structures as object serialization. There are a few default options that are wrong, but overall the API is well thought out.
In Newtonsoft I couldn't find a low-level JsonReader; then in System.Text.Json I couldn't find an equivalent of a mutable JObject. Both are great libraries, but they aren't comprehensive like the System.Xml stack.
JSON parsing is pretty much guaranteed to be a nightmare if you try and use the numeric types. Or if you repeat keys. Neither of which are uncommon things to do.
My favorite is when people start reimplementing schema ideas in json. Or, worse, namespaces. Good luck with that.
> by the time you've written a JSON parser that beats that you could've equally produced an equally improved XML system while retaining the much greater functionality it already had.
Furthermore, JSON has so many dangerously incompatible implementations that the errata for JSON implementations fills multiple books, such as advice to "always" treat numbers as strings, popular datetime "extensions" that know nothing of timezones, and so on.
Yes, but that's also everything you need to know in order to understand XML, and my experience implementing APIs is that every XML implementation is obviously-correct, because anyone making a serious XML implementation has demonstrated the attention span to read a book, while every JSON implementation is going to have some fucking weird thing I'm going to have to experiment with, because the author thought they could "get the gist" from reading two pages on a blog.
I think you are misreading the phrase "based on". The author, I believe, intends it to mean something like "descends from", "has its origins in", or "is similar to" and not that the ECMAScript 262 spec needs to be understood as a prerequisite for implementing a JSON parser. Indeed, IIRC the JSON spec defined there differs in a handful of respects from how JavaScript would parse the same object, although these might since have been cleaned up elsewhere.
JSON as a standalone language requires only the information written on that page.
Well yes, if you're writing a JSON parser in a language based on ECMAScript-262, then you will need to understand ECMAScript-262 as well as the specification for the language you're working with. The same would also apply if you were writing an XML parser in a language based on ECMAScript-262.
If you write a JSON parser in Python, say, then you will need to understand how Python works instead.
In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.
> The same would also apply if you were writing an XML parser in a language based on ECMAScript-262.
Thankfully XML specifies what a number is and anything that gets this wrong is not implementing XML. Very simple. No wonder I have less problems with people who implement XML.
> In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.
I'm glad you noticed that after it was pointed out to you.
The implications of JSON.parse() not being an implementation of JSON are serious though: If none of the browser vendors can get two pages right, what hope does anyone else have?
I do prefer to think of them as the same thing, and JSON as more complicated than two pages, because this is a real thing I have to contend with: the number of developers who do not seem to understand JSON is much much more complicated than they think.
XML does not specify what a number is, I think you might be misinformed there. Some XML-related standards define representations for numbers on top what the basic XML spec defines, but that's true of JSON as well (e.g. JSON Schema).
If we go with the XML Schema definition of a number (say an integer), then even then we are at the mercy of different implementations. An integer according to the specification can be of arbitrary size, and implementations need to decide themselves which integers they support and how. The specification is a bit stricter than JSON's here and at least specifies a minimum precision that must be supported, and that implementations should clearly document the maximum precisions that they support, but this puts us back in the same place we were before, where to understand how to parse XML, I need to understand both the XML spec (and any additional specs I'm using to validate my XML), plus the specific implementation in the parser.
(And again, to clarify, this is the XML Schema specification we're talking about here — if I were to just use an XML-compliant parser with no extensions to handle XSD structures, then the interpretation of a particular block of text into "number" would be entirely implementation-specific.)
I completely agree with you that there are plenty of complicated edge cases when parsing both JSON and XML. That's a statement so true, it's hardly worth discussion! But those edge cases typically crop up — for both formats — in the places where the specification hits the road and gets implemented. And there, implementations can vary plenty. You need to understand the library you're using, the language, and the specification if you want to get things right. And that is true whether you're using JSON, XML, or something else entirely.
> my experience implementing APIs is that every XML implementation is obviously-correct
This is not my experience. Just this week I encountered one that doesn’t decode entity/character references in attribute values <https://news.ycombinator.com/item?id=45826247>, which seems a pretty fundamental error to me.
As for doctypes and especially entities defined in doctypes, they’re not at all reliable across implementations. Exclude doctypes and processing instructions altogether and I’d be more willing to go along with what you said, but “obviously-correct” is still too far.
Past what is strictly the XML parsing layer to the interpretation of documents, things get worse in a way that they can’t with JSON due to its more limited model: when people use event-driven parsing, or even occasionally when they traverse trees, they very frequently fail to understand reasonable documents, due to things like assuming a single text node, ignoring the possibilities of CDATA or comments.
Exactly. In my experience, XML has thousands of ways to trip yourself while JSON is pretty simple. I always choose JSON APIs over XML if given the choice.
Try not to confuse APIs that you are implementing for work to make money, with random "show HN AI slop" somebody made because they are looking for a job.
FFS, have your parser fail on inputs it can not handle.
Anyway, the book defining XML doesn't tell you how your parser will handle values you can't represent on your platform either. And it also won't tell you how your parser will read timestamps. Both are completely out of scope there.
The only common issue in JSON that entire book covers is comments.
The SOAP specification does tell you how to write timestamps. It's not a single book, and doesn't cover things like platform limitations, or arrays. If you want to compare, OpenAPI's spec fills a booklet:
Aside from the other commenter's point about this being a misleading comparison, you didn't need to reinvent the whole XML ecosystem from scratch, it was already there and functional. One of the big claims I've seen for JSON though is that it has array support, which XML doesn't. And which is correct as far as it goes, but also it would have been far from impossible to code up a serializer/deserializer that let you treat a collection of identically typed XML nodes as an array. Heck, for all I know it exists, it's not conceptually difficult.
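A sketch of the convention being described (purely illustrative, not any particular library's rule): repeated, identically named children read back as an array.

    <!-- hypothetical mapping: repeated <score> children deserialize to an array -->
    <player name="alice">
      <score>10</score>
      <score>12</score>
      <score>7</score>
    </player>
    <!-- roughly {"name": "alice", "score": [10, 12, 7]} on the JSON side -->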
You need to distinguish between the following cases: `{}`, `{a: []}`, `{a:[1]}`, `{a:[1, 2]}`, `{a: 1}`. It is impossible to express these in XML in a universal way.
XML is not a data serialisation tool, it is a language tool. It creates notations and should be used to create phrase-like structures. So if a user needs these distinctions, he makes a notation that expresses them.
Basically the difference is that underlying data structures are different.
JSON supports arrays of arbitrary items and dictionaries with string keys and arbitrary values. It aligns well with commonly used data structures.
An XML node supports a dictionary with string keys and string values (attributes), one dedicated string attribute (the name), and an array of nodes (child nodes). This is a very unusual structure and requires dedicated effort to map to programming language objects and structures. There were even so-called "OXM" frameworks (Object-XML Mappers), similarly to ORMs.
Of course in the end it is possible to build a mapping between array, dictionary and DOM. But JSON is much more natural fit.
XML is immediately usable if you need to mark up text. You can literally just write or edit it and invent tags as needed. As long as they are consistent and mark what needs to be marked any set of tags will do; you can always change them later.
XML is meant to write phrase-like structures. Structures like this:
int myFunc(int a, void *b);
This is a phrase. It is not data, not an array or a dictionary, although technically something like that will be used in the implementation. Here it is written in a C-like notation. The idea of XML was to introduce a uniform substrate for notations. The example above could be like:
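(Element and attribute names here are invented for illustration.)

    <function name="myFunc" returns="int">
      <param name="a" type="int"/>
      <param name="b" type="void *"/>
    </function>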
This is, of course, less convenient to write than a specific notation. But you don't need a parser and can have tools to process any notation. (And technically a parser can produce its results in XML; it is a very natural form, basically an AST.) Parsers are usually part of a tool and do not work on their own, so first there is a parser for C, then an indexer for C, then a syntax highlighter for C and so on: each does some parsing for its own purpose, thus doing the same job several times. With XML, the processing scenario is not limited to anything: the above example can be used for documentation, indexing, code generation, etc.
XML is a very good fit for niche notations written by few professionals: interface specifications, keyboard layouts, complex drawings, and so on. And it is being used there right now, because there is no other tool like it, aside from a full-fledged language with a parser. E.g. there is an XML notation that describes numerous bibliography styles. How many people need to describe bibliography styles? Right. With XML they start getting usable descriptions right away and can fine-tune them as they go. And these descriptions will be immediately usable by generic XML tools that actually produce these bibliographies in different styles.
Processing XML is like parsing a language, except that the parser is generic. Assuming you have no text content it goes in two steps: first you get an element header (name and attributes), then the child elements. By the time you get these children they are no longer XML elements, but objects created by your code from these elements. Having all that you create another object and return it so that it will be processed by the code that handles the parent element. The process is two-step so that before parsing you could alter the parsing rules based on the element header. This is all very natural as long as you remember it is a language, not a data dump. Text complicates this only a little: on the second step you get objects interspersed with text, that's all.
People cannot author data dumps. E.g. the relational model is a very good fit for internal data representation, much better than JSON. But there is no way a human could author a set of interrelated tables aside from tiny toy examples. (The same thing happens with state machines.) Yet a human can produce tons of phrase-like descriptions of anything without breaking a sweat. XML is such an authoring tool.
But the part of XML that is equivalent to JSON is basically five special symbols: angle brackets, quotes and ampersand. Syntax-wise this is less than JSON (and it even has two kinds of quotes). All the rest are extras: grammar, inclusion of external files (with name and position based addressing), things like element IDs and references, or a way to formally indicate that contents of an element are written in some other notation (e. g. "markdown").
Having used XSLT, I remember hating it with the passion of a thousand suns. Maybe we could have improved what we had, but anything I wanted to do was better done somehow else.
I'm glad to have all sorts of specialists on our team, like DBAs, security engineers, and QA. But we had XSLT specialists, and I thought it was just a waste of effort.
You can do some cool stuff, like serving an RSS file that is also styled/rendered in the browser. A great loss for the 2010 idea of the semantic web. One corporation is unhappy because it does not cover their use cases.
> RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.
Hope I can quote it about the Transformer architecture one day.
I don't really need or use XSLT (I think), so I am not really affected either way. But I am also growing mightily tired of Google thinking "I am the web" now. This is really annoying to no end. I really don't want Google to dictate onto mankind what the web is or should be. Them killing off uBlock Origin also shows this corporate mindset at work.
This is also why I dislike AI browsers in general. They generate a view for the user that may not be real. They act like a proxy-gate, intercepting things willy-nilly. I may be oldschool, but I don't want governments or corporations to jump in as a middle-man and deny me information and opportunities of my own choosing. (Also Google Suck, I mean Google Search, has sucked for at least 5 years now. That was not accidental - that was deliberate by Google.)
That sums up pretty much how I think about that. I don't have any opinion about XSLT either way... I'm just so tired. If Google decided to kill HTML tomorrow, who could stop them?
Google needs to be broken up into three or more companies. Search, Android, Chrome, and AdSense should not live together.
Lina Khan had the right idea and mandate, but she was too fucking slow.
When the Dems swing back into power, the gutting of big tech needs to be swift and thorough. The backbone needs to be severed. I'm screaming at my representatives to do this.
Google took over web tech, turned the URL bar into their Search product. They force brands to buy ads for their own brand names - think about how much money they make by selling ads on the keywords "AirPods" or "Nintendo Switch". They forced removal of ad blocking tech unilaterally. They buy up all the panes of glass they don't already own. They don't allow you to install your own software on mobile anymore. And you have to buy ads for your app too, otherwise your competitor gets installed. If you develop software, you're perpetually taxed and have to do things their way. They're increasingly severing the customer relationship. They're putting themselves in as middle men in the payments industry, the automotive industry, the entertainment industry...
Look at how many products they've built and thrown away in the game of trying to broker your daily life.
I could go on and on and on... They're leeches. Giant, Galactus-sized leeches.
The bulk of the money they make is from installing themselves as middlemen.
And anyone thinking they're your friends - they conspired to suppress wages, and they're actively cutting jobs and rebuilding the teams in India. Congrats, they love you. They're gutting America and are 100% anti-American. I love India and have nothing against its people; I'm just furious that this domestic company - this giant built on the backs of American labor and its population - hates its own country so much. (You know they hate us because they're still stuffing Corporate Memphis down our throat.)
Edit: I have to say one thing positively because Google makes me so negative. This website is beautiful. I was instantly transported back in time. But it's also a nice modern reinterpretation of retro web design. I love it so much.
Antitrust needs to make a comeback in general. I've been seeing meme-graphics about consolidation in various industries my entire life (like how most of the stuff on the shelves in grocery stores comes from about a half-dozen companies; even if there are 20 "brands" on the shelf making the market look healthier than it is, 18 of those 20 will actually be owned by those very few companies; ditto media, telecom, etc.).
Why did everything consolidate terribly in the '80s and '90s? Because we basically stopped enforcing antitrust in the '70s, due to Chicago School jackasses influencing policy and jurisprudence.
We need to undo their fake-pro-markets horse-shit and get back to having robust markets in every sector, not just software (but yes, certainly in software too). That'll require a spree of breaking up big companies across the economy.
One of the things that startled me when working for Google is how much of their decisionmaking actually looks like "This sucks and we don't want to be responsible for it... But there isn't anyone else who can be, so I guess it's us."
I'm not saying this is optimal or that it should be the way it is, but I am saying there are problems with alternative approaches that need to be addressed.
To give a comparison: OpenGL tried a collaborative and semi-open approach to governance for years, and what happened was they got more-or-less curb-stomped by DirectX, so much so that it drove Windows adoption for years as "the architecture for playing videogames." The mechanism was simple: while OpenGL's committee tried to find common ground among disparate teams with disparate needs, Microsoft went
1) we control this standard; here are the requirements you must adhere to
2) we control the "DirectX" trademark, if you fail to adhere to the standards we decertify your product.
As a result, you could buy a card with "DirectX" stamped on it, slap it into your Windows machine, and it would work. You couldn't do anything like that with OpenGL hardware; the standard was so loose (and enforcement so nonexistent) that companies could, via the "gestalt" feature-detection layer, claim a feature was supported if they had polyfilled a CPU-side software renderer for it. Useless for games (or basically any practical application), but who's gonna stop them from lying?
Browsers aren't immune to market forces; a standard that is too inflexible or fails to reflect the actual implementation pressures and user needs will be undercut by alternative approaches.
I'm not saying current governance of the web is that bad, but I bring up the history of OpenGL as an example of why an open, cooperative approach can fail and the pitfalls to watch out for. In the case of this specific decision regarding XSLT, it appears from the outside looking in that the decision is being made in consensus by the three largest browser engine developers and maintainers. What voice is missing from that table, and who should speak for them?
(Quick side-note: Apple managed to dodge a lot of the OpenGL issues by owning the hardware stack and playing a similar card to Microsoft's with different carrots and sticks: "This is the kernel-level protocol you must implement in hardware. We will implement OpenGL in software. And if your stuff doesn't work we just won't sell laptops with your card in them; nobody in this ecosystem replaces their graphics hardware anyway").
Not suggesting an alternative model here, but I think that Google et al. (based on my own time working on Chrome) don't take that responsibility quite as seriously as they should. Being responsible may be an accident, but being dominant in any given area is not. The forces inside Google which take over parts of the world do so without really caring about the long-term commitment.
It is so possible to preserve XSLT and other web features e.g. by wrapping them in built-in (potentially even standardized) polyfills, but that kind of work isn't incentivized over new features and big flashy refactors.
Completely agree. Among the reasons I no longer work for Google is that I could not escape the perception that they were the 800-lb gorilla in the room and deeply uncomfortable with taking on any responsibility given that circumstance.
When you are the biggest organization in a space, it's your space whether you feel qualified to lead or not. The right course of action is "get qualified, fast." The top-level leadership did not strike me as willing to shoulder that responsibility.
My personal preferred outcome to address the security concerns with XSLT would probably be to replace the native implementation with a JavaScript-sandboxed implementation in-browser. This wouldn't solve all issues (such an implementation would almost certainly be operating in a privileged state, so there would still be security concerns), but it would take all the "this library is living at a layer that does direct unchecked memory manipulation, with all the consequences therein" off the table. There is, still, a case to be made perhaps that if you're already doing that, the next logical step is to make the whole feature optional by jettisoning the sandboxed implementation into a JavaScript library.
With browsers being as complicated as they are, I kind of support this decision.
That said, I never used XSLT for anything, and I don't see how its support in browsers is tied to RSS. (Sure, you could render your page from your RSS feed, but that seems like a marginal use case to me.)
The lack of the jump scare cookie banner on the XSLT version is certainly an improvement, but I otherwise agree. Google search burying XSLT driven pages isn't a surprise given their stance.
I think Google has a general philosophy of the web that promotes crawlable HTML over other formats. I noticed recently that traditional job aggregators like XML job feeds, yet Google promotes JobSchema as an incompatible standard. So it's less that Chromium directs PageRank, and more that Google's general view of the web is HTML over XML. I hope JobSchema fails because it is harder to aggregate, unless you already index web pages at scale.
Although I don't have firm evidence, haven't worked at Google, and you likely know company dynamics better than I.
Sure, there are examples of websites using XSLT, but so far I've only seen a dozen or maybe two dozen, and it really looks like they are extremely rare. And I'm pretty sure the EU parliament et al. will find someone to rework their pages.
This really is just a storm in a waterglass. Nothing like the hundreds or tens of thousands of flash and java applet based web pages that went defunct when we deprecated those technologies.
Those had good rationale for deprecating that I would say don't apply in this instance. Flash and Java applets were closed, insecure plugins outside the web's open standards, so removing them made sense. XSLT is a W3C standard built into the web's data and presentation layer. Dropping it means weakening the open infrastructure rather than cleaning it up.
> This really is just a storm in a waterglass. Nothing like the hundreds or tens of thousands of flash and java applet based web pages that went defunct when we deprecated those technologies.
Sure, but Flash and Java were never standards-compliant parts of the web platform. As far as I'm aware, this is the first time that something has been removed from the web platform without any replacements—Mutation Events [0] come close, but Mutation Observers are a fairly close replacement, and it took 10 years for them to be fully deprecated and removed from browsers.
They are definitely rare. And I suspect that if you eliminate government web sites where usage of standards is encouraged, if not mandated, the sightings “in the wild” are very low. My guess would be less than 1% of sites use XSLT.
If there were that many, why do people only list the same handful again and again? And where are all the /operators/ of those websites complaining? Is it possible that installing an XSLT processor on the server is not as big a hassle as everyone pretends?
Again: this is nothing like Flash or Java applets (or even ActiveX). People seriously considered Apple's decision not to support Flash on the iPhone a strategic blunder because of the number of sites using it. Your local news station probably had video or a stock market ticker using Flash. You didn't have to hunt for examples.
> If there were that many, why do people only list the same handful again and again? And where are all the /operators/ of those websites complaining?
I've spent the last several years making a website based on XML and XSLT. I complain about the XML/XSLT deprecation from browsers all the time. And the announcement in August that Google was exploring getting rid of XSLT in the browser (which, it turned out, wasn't exploratory at all; it was a performative step toward a foregone conclusion) drew so much blowback that the discussion got locked, and Google forged ahead anyway.
> Is it possible that installing an XSLT processor on the server is not as big a hassle as everyone pretends?
This presumes that everyone interested in making something with XML and XSLT has access to configure the web server it's hosted on. With support in the browser, I can throw some static files up just about anywhere and it'll Just Work(tm)
Running a script that interprets a different script to transform a document just complicates things. What do I do when the transform fails? I have to figure out how to debug both XSLT and JavaScript to figure out what broke.
I don't have any desire to learn JavaScript (or use someone else's script) just to do some basic templating.
What does one do when a transform fails right now? You have to debug both XSLT and a binary you don't have the source for; debugging JavaScript seems like a step up, right?
I used to be able to load the local XML and XSLT files in a browser and try it. When the XSLT blew up, I'd get a big ASCII arrow pointing to the part that went 'bang'. It still only kind of works in Firefox:
XML Parsing Error: mismatched tag. Expected: </item>.
Location: https://example.org/rss.xml
Line Number 71, Column 3:
</channel>
--^
Chrome shows a useless white void.
I enabled the nginx XSLT module on a local web server and serve the files to myself that way. Now when it fails I can check the logs to see what instruction it failed on. It's a bad experience, and I'm not arguing otherwise, but it's just about the only workaround left.
It's a circular situation: nobody wants to use XSLT because the tools are bad and nobody wants to make better tools because XSLT usage is too low.
> Then write it in languages that have debuggers, instead of XSLT.
Up until a few years ago, I could debug basic stuff in Firefox. If Firefox encountered an XSLT parsing error, it would show an error page with a big ASCII arrow pointing to the instruction that failed. That was a useful clue. Now it shows a blank page, which is not useful at all.
In Safari, at least, clicking an RSS link prompts you to open it in an RSS reader, which I think is a superior experience. Reading an RSS feed in the browser is not without use, but I'd argue that's mostly the job of the site itself.
The sites sometimes want to provide some special formatting on top of the RSS without modifying it. For example, you might point people to available RSS readers which may not be installed or provide other directions to end users. RSS feeds are used in places other than reading apps. I've seen people suggest that this transformation could be done server-side, but that would modify the RSS feed which needs to be consumed.
Technically yes, but so what? The RSS use case is almost the only thing XSLT can uniquely provide (at the moment). Every other use case of XSLT can be done in other ways, including the use of server-side XSLT processors.
This is useful because feed URLs look the same as web page URLs, so users are inclined to click on them and open them in a web browser instead of an RSS reader. (Many users these days don't even know what an RSS reader is). The stylesheet allows them to view the feed in the browser, instead of just being shown the XML source code.
Why is this so critical? We don't do this for any other format. If you put an MS Office document on a page, the browser doesn't render it; we download it and pass it off to a dedicated program. Why is RSS so special here?
Because we want RSS to be friendly to new users. If you display an RSS feed as a wall of XML text, no new user will understand it. If you instead make it so clicking an RSS link brings up a blurb about what RSS is and links on how to use it, they might understand.
And we have done it for other formats: PDF is now quite well supported in browsers without plugins/etc.
An RSS feed is not a document meant for viewing. It's not like PDF or HTML or a video.
It's a format intended to be consumed like an API call. It's like JSON. The link is something you import into an aggregator.
RSS feeds shouldn't even be displayed as XML at all. They should just be download links that open in an aggregator application. The same way .torrent files are imported into a torrenting client, not viewed.
1. This is pretty difficult for someone who doesn't know about RSS. How would they ever learn what to do with it?
2. Browsers don't do that. There used to be an icon in the URL bar when they detected an RSS feed. It would be wonderful if browsers did support doing exactly what you suggest. I'm not holding my breath.
I'm not looking to replicate my blog via XSLT of the RSS feed: that's what the blog's HTML pages are. I just don't want to alienate non-RSS users.
People learn what to do with RSS the same as with anything else. They look it up or someone tells them. It's not like a .psd file tells you what it is, if you don't have Photoshop installed.
I don't think you need to worry about "alienating" non-RSS users. If somebody clicks on an RSS link without knowing what RSS is and sees gibberish, that's not really on you. They can just look it up. Or you can put a little question-mark icon next to the RSS link if you want to educate people. But mostly, for feeds and social media links, people just ignore the icons/acronyms they don't recognize.
And XSLT in that context is interesting: you ship the RSS file, a web browser renders it with XSLT into something human-readable, and a smart client can still do smart things with the raw data. All from the same file.
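To make that concrete, here is a minimal sketch of such a single file (the feed contents and the feed.xsl filename are invented for illustration): the xml-stylesheet processing instruction is what an XSLT-capable browser uses to render the feed, while RSS readers ignore it and just consume the items as data.

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="feed.xsl"?>
    <!-- One file: feed readers parse the items below; browsers apply feed.xsl. -->
    <rss version="2.0">
      <channel>
        <title>Example Weblog</title>
        <link>https://example.org/</link>
        <description>Hypothetical feed, for illustration only.</description>
        <item>
          <title>Hello, world</title>
          <link>https://example.org/posts/hello</link>
        </item>
      </channel>
    </rss>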
"Semantic" means making all distinctions you care about and not making any distinctions you do not care about. This means a custom notation for nearly every case. XML is such a tool. And XSLT is a key component to make all these notations compatible with each other.
That is not what "semantic web" means. The Semantic Web was a series of standards (RDF and friends) made by the W3C in the early 2000s that didn't really catch on.
Ok but maintaining a web browser that supports a ton of small features that nobody-except-me-and-my-cousin are using has a huge cost; you don’t support obscure features just because someone somewhere is relying on it (relevant: https://xkcd.com/1172/).
Why would Google keep supporting AMP if the line is drawn only by use?
They chose to kill off a spec and have it removed from every browser because they don't like it. They choose to keep maintaining AMP because it's their pet project and spec. It's as simple as that; it has nothing to do with limited resources forcing them to trim features rather than maintain or improve them.
Well, IMO it would be cool if we could do that, but the MS Office formats are a lot more complicated so it's a lot more work to implement. Also, quite often the whole point of sharing a file in MS Office format is so that the user can take it and edit it, which would require a dedicated program anyway.
If you think about it, basically nothing except HTML is a critical function of browsers. You can solve everything just with that. We don’t even need CSS, or any custom styling at all. JavaScript is absolutely not necessary.
> font, color, and others are no longer in HTML5 spec.
Sometimes browsers are asked to render HTML documents that were written decades ago to conform to older specs and are still on the internet. That still works
> JS is not there just for client side static DOM rendering. Something like Google Maps or an IRC chat would be a much poorer experience without it.
Of course they would. That's most of the point. You can do a lot more damage with JavaScript than you currently can with XSLT, but XSLT has to go because of 'security concerns'
I don't think it's a critical feature, but it is nice-to-have.
Imagine if you opened a direct link to a JPEG image and instead of the browser rendering it, you'd have to save it and open it in Photoshop locally. Wouldn't that be inconvenient?
Many browsers do support opening web-adjacent documents directly because it's convenient for users. Maybe not Microsoft Word documents, but PDF files are commonly supported.
Yeah, but browsers actually make use of that format. And it's not like you can add a special header to JPEG files to do custom reformatting of the JPEG via a Turing-complete language. Browsers just display the file.
You can do the transformation server-side, but it's not trivial to set it up. It would involve detecting the web browser using the "Accept" header (hopefully RSS readers don't accept text/html), then using XSLT to transform the XML to XHTML that is sent to the client instead, and you probably need to cache that for performance reasons. And that's assuming the feed is just a static file, and not dynamically generated.
In theory you could do the transformation client side, but then you'd still need the server to return a different document in the browser, even if it's just a stub for the client-side code, because XML files cannot execute Javascript on their own.
Another option is to install a browser extension but of course the majority of users will never do that, which minimizes the incentive for feed authors to include a stylesheet in the first place.
How about using JavaScript to fetch the XML (just like you would with JSON), and then parsing/transforming it with a JavaScript or WASM XSLT library?
You need a server to serve JSON as well. Basically, see XML as a data format.
RSS readers are not Chrome, so they have their own libraries for parsing/transforming with XSLT.
Not without servers rendering the HTML or depending on client-side JS for parsing and rendering the content.
It's also worth noting that the latest XSLT spec actually supports JSON as well. Had browsers decided to implement that spec rather than remove support altogether, you'd be able to render JSON content to HTML entirely client-side without JS.
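For the curious, a rough sketch of what that could have looked like (this is XSLT 3.0, which no browser ever shipped; the posts.json file and the "title" key are invented): json-to-xml() turns a JSON string into an XML tree in the standard functions namespace, which ordinary templates can then walk.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical XSLT 3.0 stylesheet: render a JSON file as an HTML list. -->
    <xsl:stylesheet version="3.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                    xmlns:fn="http://www.w3.org/2005/xpath-functions">
      <xsl:output method="html"/>
      <xsl:template match="/">
        <!-- json-to-xml() produces fn:map/fn:array/fn:string/... elements. -->
        <xsl:variable name="data" select="json-to-xml(unparsed-text('posts.json'))"/>
        <ul>
          <xsl:for-each select="$data/fn:array/fn:map">
            <li><xsl:value-of select="fn:string[@key = 'title']"/></li>
          </xsl:for-each>
        </ul>
      </xsl:template>
    </xsl:stylesheet>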
This site is a bit of a Rorschach test as it plays both sides of this argument: bad Google for killing XSLT, and the silliness of pushing for XSLT adoption in 2025.
"Tell your friends and family about XSLT. Keep XSLT alive! Add XSLT to your website and weblog today before it is too late!"
I already have XSLT in my website because I have an Atom feed and XSLT is the only way to serve formatted Atom/RSS feeds in a static site. Perhaps you have never considered the idea that someone might want to purchase some cheap static hosting to serve their personal website, but it is a fine way to do things. This change pries the web ever further out of the hands of common people and into the big websites that just want the browser to serve their apps.
How do you intend to put PHP in an RSS document? If it serves an HTML one instead, then the RSS will no longer be available. You could try checking the HTTP headers to determine if the page is being fetched by an RSS reader or a browser, but such an approach is much more brittle than XSLT, which solves the problem exactly and easily. Not to mention it allows users to download browser extensions that override the provided formatting of XSLT documents with a custom standard one if they desire.
Not every application will set these correctly. It is less reliable than simply serving a static page. And that's the core of this issue. Before XSLT was removed, your website could be a directory of static content. You can put it in a zip file and send it anywhere you want. Now, even the most basic website (blog + feed) will require some dynamic content to work properly. We go from a world where static hosting is possible to one where it's less possible, and all because some browser implementors couldn't be bothered to upgrade a library to a safe version.
This would also break the workflow I have for my site, where I build it as a static directory locally during development and point Python's trivial HTTP server at it to access the content over localhost.
And it's totally insulting because the people removing this have created a (memory safe!) browser extension that lets you view XSLT documents, and put special logic in the browser to show users a message telling them to download that extension when an XSLT-styled document is loaded. They should bundle the extension with the browser instead of breaking my website and telling users where to fix it.
It's perfectly reasonable (and much more maintainable and powerful) to use client side JavaScript on a static site to transform Atom or RSS into HTML.
If your argument is that you don't want to use JavaScript because it's Turing complete and insecure and riddled with bugs and security holes, then why the fuck are you using XSLT?
RSS documents do not support JavaScript. Also, XSLT is not Turing complete as far as I know, though some implementations extend the spec to become Turing complete. Even if it is, a potentially Turing complete XSLT document does not present the same kinds of risks as JavaScript does. Do you think someone will be able to fingerprint your browser using XSLT? I'll file that under “highly unlikely”. Spectre and Meltdown also aren't exactly going to work in XSLT. There are memory-safe XSLT parsers available, and existing parsers can be run in a memory-safe WASM sandbox, so that's not really a concern either.
But as you obviously know, HTML documents do support JavaScript, and there's no reason to link to a raw XML RSS or Atom document directly, so problem solved. If you're so cautious you refuse to enable JavaScript, then you have absolutely no justification for enabling XSLT.
Handwaving that vulnerabilities are "highly unlikely" is dangerous security theater. It doesn't matter how unlikely you guess and wish they are, they just have to be possible. And the fact that the XSLT 1.0 implementations built into browsers are antique un-sandboxed memory-unsafe C++ code make vulnerabilities "highly likely", not "highly unlikely", which the record clearly proves.
Browsers only natively support the ancient version of XSLT 1.0, so if you need a less antiquated version, you should use a modern memory safe sandboxed polyfill, or process it on the server side, or more safely not use XSLT at all and simply use JavaScript instead (simply transforming RSS to HTML directly with JavaScript is a MUCH smaller and harder attack surface than the massive overkill of including an entire sandboxed general purpose Turing complete XSLT processor), instead of foolishly relying on non-sandboxed old untrustworthy poorly maintained C++ code built into the browser.
Of course all versions of XSLT are Turing complete, as you can easily confirm on Wikipedia, and which is quite obvious if you have ever read the manual and used it. It has recursive template calls, conditionals, variables and parameters, pattern matching and selection, text and node construction, unbounded input and recursion depth, etc. So how could it possibly not be Turing complete, since it has the same expressive power of functional programming languages? And that should be quite obvious to anyone who knows XSLT and basic CS101, at a glance, without a formal proof.
>While XSLT was originally designed as a special-purpose language for XML transformation, the language is Turing-complete, making it theoretically capable of arbitrary computations.
Do you recall the title of Chrome's web page explaining why they're removing XSLT? "Removing XSLT for a more secure browser" (aka "Bin Ladin Determined To Strike in XSLT" ;). Didn't you read that article, and the recent HN discussion about it? You can't just claim nobody warned you, like GW Bush tried to do.
>The continued inclusion of XSLT 1.0 in web browsers presents a significant and unnecessary security risk. The underlying libraries that process these transformations, such as libxslt (used by Chromium browsers), are complex, aging C/C++ codebases. This type of code is notoriously susceptible to memory safety vulnerabilities like buffer overflows, which can lead to arbitrary code execution. For example, security audits and bug trackers have repeatedly identified high-severity vulnerabilities in these parsers (e.g., CVE-2025-7425 and CVE-2022-22834, both in libxslt). Because client-side XSLT is now a niche, rarely-used feature, these libraries receive far less maintenance and security scrutiny than core JavaScript engines, yet they represent a direct, potent attack surface for processing untrusted web content. Indeed, XSLT is the source of several recent high-profile security exploits that continue to put browser users at risk. The security risks of maintaining this brittle, legacy functionality far outweighs its limited modern utility. [...]
Your overconfidence in XSLT's security in browsers is unjustified and unsupported by its track record and reputation, its complexity is extremely high, it's written in unsafe un-sandboxed C/C++, it gets vastly less attention and hardening and use than JavaScript, and its vulnerabilities are numerous and well documented.
Examples:
CVE‑2025‑7425: A heap use-after-free in libxslt caused by corruption of the attribute type (atype) flags during key() processing and tree-fragment generation. This corruption prevents proper cleanup of ID attributes, enabling memory corruption and possibly arbitrary code execution.
CVE‑2024‑55549: Another use-after-free in libxslt (specifically xsltGetInheritedNsList) disclosed via a Red Hat advisory.
CVE‑2022‑22834: An XSLT injection vulnerability in a commercial application (OverIT Geocall) allowing remote code execution from a “Test Trasformazione XSL” feature. Shows how XSLT engines/processors can be attack surfaces in practice.
CVE-2019-18197 (libxslt 1.1.33): In the function xsltCopyText (file transform.c), a pointer variable isn't reset in certain flows; if the memory area was freed and reused, a bounds check could fail and either write outside a buffer or disclose uninitialised memory.
CVE-2008-2935: buffer overflows in crypto.c for libexslt.
CVE-2019-5815: type confusion in xsltNumberFormatGetMultipleLevel; one of many repeated memory safety flaws (heap/stack corruption, improper bounds checks, pointer reuse) in the library over the years.
>and there's no reason to link to a raw XML RSS or Atom document directly
What's the point of having an Atom feed if I can't give people a link to it? Do you just expect me to write “this website has an atom feed” and have only the <link> element invisibly pointing at it? That is terrible UX. And then what if I want to include a link to my feed in a message to share it with someone?
>Handwaving that vulnerabilities are "highly unlikely" is dangerous security theater
No it isn't. There are memory safe XSLT implementations. Not so for JavaScript. This is because XSLT is a simple language and JavaScript a complicated one. You are trying to make the case that XSLT is inherently unsafe because poor implementations of it exist, yet it is actually much safer because safe implementations exist and are easy to write. It can initiate no outgoing internet connections, cannot read from memory directly, cannot do any of the things that makes JavaScript inherently dangerous.
>simply transforming RSS to HTML directly with JavaScript is a MUCH smaller and harder attack surface than the massive overkill of including an entire sandboxed general purpose Turing complete XSLT processor
Firstly, you can't include JavaScript tags in RSS or Atom, so my website would not conform to any web standard. Secondly, by using JavaScript, I'm demanding that my users enable a highly dangerous web feature that has been the basis for many attacks. By using XSLT, I'm giving them the option to use a much smaller interface with safer implementations available. How many CVEs have there been in JavaScript runtimes compared with XSLT? And finally, browser developers should just bundle one of these JavaScript polyfills and activate it for documents with stylesheets if they are so easy to use. Demanding that users deviate from web standards to get simple features like XML styling is ridiculous, and it would clearly be little effort for them to silently append a polyfill script to documents with XSLT automatically. If that's the only way they can make it secure, that's what they should do.
>Your overconfidence in XSLT's security in browsers is unjustified and unsupported by its track record and reputation, its complexity is extremely high, it's written in unsafe un-sandboxed C/C++, it gets vastly less attention and hardening and use than JavaScript, and its vulnerabilities are numerous and well documented.
I have no confidence at all in browsers' implementations of XSLT because they admit they use a faulty library. I have absolute confidence that it would be little effort to replace the faulty library with a correct one, and that doing so would be miles safer than expecting users to enable JavaScript.
>Of course all versions of XSLT are Turing complete, as you can easily confirm on Wikipedia
Do not quote Wikipedia as a source. In this case, the provided source in the Wikipedia page claims only that version 2.0 is Turing complete, and this claim is erroneous, based on a proprietary extension of certain XSLT processors but not that used in Chrome.
It is quite frankly ridiculous to me that people are bending over backwards to suggest that XSLT is somehow an inherent security risk when you can include a JavaScript fragment in pages to trigger an XSLT processor. Whatever risk is posed by XSLT is a clear subset of that posed by JavaScript for this reason alone. You will never see a complete JavaScript implementation in XSLT because it isn't possible. One language is given greatly more privileged access to the resources and capabilities of the user's computer than the other.
The decision of web browser vendors to include faulty XSLT libraries when safe ones exist is the source of risk here. And now these same people, who have been putting users at risk in a billion different ways over the years, come to me and suggest that I have to remove a completely innocuous feature from my website and replace it with a more dangerous one, breaking standards compliance, because they can't be bothered to switch from an unsafe implementation to a safe one.
IMHO, Google has become the most powerful tech company out there! It has a strong monopoly in almost every aspect of our lives and it is becoming extremely difficult to completely decouple from it. My problem with this is that it now dictates and influences what can be done, what is allowed and what not, and, with its latest Android saga (https://news.ycombinator.com/item?id=45017028), it's become worrying.
So why is almost nobody here actually defending it on its own merits? In my opinion XSLT was a bad idea ~20 years ago when I started in web development. It was convoluted, not nice to work with and the implementations buggy.
Most people seem to think it is bad because it is Google who want to remove it. Personally I just see Google finally doing something good.
There is so much defense of XSLT it’s crazy you assume no one is here defending it. This thread isn’t the single defense point against Google.
Not only that, Google engineer Mason Freed has shown pretty forcefully that he will not listen to defense, reason or logic. This is further evidenced by Google repeatedly trying to kill it for 25 years.
End of an era! I remember going through XSLT tutorials many decades ago and learning everything there was to learn about this curious technology that could make boring XML documents come 'alive'. I still use it to style my RSS feeds, for example, <https://susam.net/feed.xml>. It always felt satisfying that an XML file with a stylesheet could serve as both data and presentation.
Keeping links to the original announcements for future reference:
I know that every such feature adds significant complexity and maintenance burden, and most people probably don't even know that many browsers can render XSLT. Nevertheless, it feels like yet another interesting and niche part of the web, still used by us old-timers, is going away.
If Google cured cancer tomorrow, there's someone that would be complaining about it and adding "cancer" to the "killed by Google" list. I would be very surprised if smaller browser vendors were happy about having to maintain ancient XSLT code, and I doubt new vendors were planning on ever adding support. Good riddance.
The post specifically calls out Apple and Mozilla as wanting to get rid of XSLT support, but just insinuates that this is because Google is paying them off. Obviously I think Google's monopoly position and backroom dealings are bad, but I also think that's completely unrelated, and that the more likely explanation for the other mainstream vendors wanting to get rid of XSLT is that it's a feature virtually no one uses and is likely a maintenance burden for the other non-Chromium browsers.
> Smaller browser vendors already pick and choose the features they support.
If there weren't a gazillion features to support, maybe there would be more browsers. I think criticizing Google and other vendors for _adding_ tons of bloat would be a better use of time.
I know you're being sarcastic, but to be pedantic WebGPU (usually) uses canvas. Canvas is the element, WebGPU is one of the ways of rendering to a canvas, in addition to WebGL and CanvasRenderingContext2D.
And even that isn't enough; no browser supports WebGPU on all platforms out of the box. https://caniuse.com/webgpu
Chrome supports it on Windows and macOS, Linux users need to explicitly enable it. Firefox has only released it for Windows users, support on other platforms is behind a feature flag. And you need iOS 26 / macOS Tahoe for support in Safari. On mobile the situation should be a bit better in theory, though in my experience mobile device GPU drivers are so terrible they can't even handle WebGL2 without huge problems.
thanks for this, you made my day! i never bothered to look.
i still remember when tables were forced out of fashion by hordes of angry div believers! they became anathema and instantly made you a pariah. the arguments were very passionate but never made any sense to me: the preaching was separating structure from presentation, mostly to enable semantics, and then semantics became all swamped with presentation so you could get those damned divs aligned in a sensible way :-)
just don't use (or abuse) them for layout but tables still seem to me the most straightforward way to render, well, tabular content.
While I agree with the sentiment, I loathe these "retro" websites that don't actually look like how most websites looked back then. It's like how people remember the 80s as neon blue and pink when it was more of a brownish beige.
>While I agree with the sentiment, I loathe these "retro" websites that don't actually look like how most websites looked back then.
Countless websites on Geocities and elsewhere looked just like that. MY page looked like that (but more edgy, with rotating neon skull gifs). All those silly GIFs were popular and there were sites you could find and download some for personal use.
>It's like how people remember the 80s as neon blue and pink when it was more of a brownish beige.
In North Platte or Yorkshire maybe. Otherwise plenty of neon blue and pink in the 80s. Starting from video game covers, arcades, neon being popular with bars and clubs, far more colorful clothing being popular, "Memphis" style graphic design, etc.
The brown, beige, and dark orange were extremely prevalent in the 80s --- but a lot of that was a result of the fact that most things in your environment are never brand new; the first half of the 80s was mostly built in the second half of the 70s.
This look with animations and bright text on dark repeated backgrounds was definitely popular for a while in the late 90s. You wouldn’t see it on larger sites like Yahoo or CNN, but it was definitely not unheard of for personal sites.
Gray backgrounds were also popular, with bright blue for unvisited links and purple for visited links. IIRC this was inspired by the default colors of Netscape Navigator 2.
> IIRC this was inspired by the default colors of Netscape Navigator 2.
"Inspired" is an interesting word for "didn't set custom values." And I believe Mosaic used the same colors before. I'm not even sure when HTML introduced the corresponding attributes (this was all before CSS ...)
If there is no white 1x1 pixel that is stretched in an attempt to make something that resembles actual layout, or multiple weird tables, I always ask: are they even trying?
In all seriousness: they got quite a good run with XSLT. Time to let it rest.
1x1 pixels for padding and aligning were absolutely a thing in the late 90s (1997+). Don't know what alternative history you have in mind, but it was used in the "table layout" era.
What came later was the float layout hell- sorry, "solution".
The 1x1 pixel gif hack arrived shortly after Netscape 1.1 introduced tables. I believe this was before colored text and tiled backgrounds became available. So the hack is definitely part of the “golden age” of web design.
I once got into a cab in NYC on Halloween and the driver said to me, hey, you really nailed that 80s hairstyle, thinking I had styled it for Halloween. I had to tell him dude, I’m from the 80s.
Worth noting XSLT is actually based on DSSSL, the Scheme-based document transformation and styling language of SGML. Core SGML already has "link processes" as a means to associate simple transforms/renames reusing other markup machinery concepts such as attributes, but it also introduces a rather low-level automaton construct to describe context-dependent and stateful transformations (the kind of thing that would've been used for recto/verso rendering on even/odd print pages).
I think it's interesting because XSLT, based on DSSSL, is already Turing-complete and thus the XML world lacked a "simple" sub-Turing transformation, templating, and mapping macro language that could be put in the hands of power users without going all the way to introduce a programming language requiring proper development cycles, unit testing, test harnesses, etc. to not inevitably explode in the hands of users. The idea of SGML is very much that you define your own little markup vocabulary for the kind of document you want to create at hand, including powerful features for ad-hoc custom Wiki markup such as markdown, and then create a canonical mapping to a rendering language such as HTML; a perspective completely lost in web development with nonsensical "semantic HTML" postulates and delivery of absurd amounts of CSS microsyntax.
As a youngster entering the IT professional circles, I was enamoured with SGML: creating my own DTDs for humane entry for my static site generator, editing my SGML source document with Emacs sgml-mode. I worked on TEI and DocBook documents too (and was there something related to Dewey coding system for libraries?).
However, processing fully compliant SGML, before you even introduce DSSSL into the picture, was a nightmare. With only one fully compliant parser, which was also the only open-source one (nsgmls), and which was hard to build on contemporary systems, let alone run, really using SGML for anything was an exercise in frustration.
As an engineering mind, I loved the fact you could create documents that are concise yet meaningful, and really express the semantics of your application as efficiently as possible. But I created my own parsers for my subset, and did not really support all of the features.
HTML was also redefined to be an SGML application with 4.0.
I originally frowned on XML as a simplification to make it work for computers vs for humans, but with XML, XSLT, Xpath... specs, even that was too complex for most. And I heavily used libxml2 and libxslt to develop some open source tooling for documentation, and it was full of landmines.
All this to say that SGML has really spectacularly failed (IMO) due to sheer flexibility and complexity. And going for "semantic HTML" in lieu of SGML + DSSSL or XML + XSLT was really an attempt to find that balance of meaning and simplicity.
It's the common cycle as old as software engineering itself.
I stand corrected: HTML was defined as an SGML application from the very first published version in 1993 (https://www.w3.org/MarkUp/draft-ietf-iiir-html-01.txt), but I know the original draft in 1990-91 was heavily SGML inspired even if it didn't really conform to the spec (nor provide a DTD). Thanks for pointing this out, it's funny how memory can play games on us :)
While HTML is clearly the most used document markup language there has ever been, almost nobody is using an SGML-compliant parser to parse and process it, and most are not even bothering with the DTD itself; not to mention that HTML5 does not provide a DTD and really can't even be expressed with an SGML DTD.
So while HTML used to be one of SGML's "applications" (a document type, along with a formal definition), on the web it was never treated as such, but as a very specific language that is merely inspired by SGML and only loosely follows the spec (since day 1, all browsers accepted "invalid" HTML and they still do).
Ascribing the success to SGML is completely backwards, IMHO: HTML was successful despite it being based on SGML, and for all intents and purposes, majority never really cared about the relationship.
But did it ever actually work in practice? As I remember it, the XSLT-backed websites still needed "absurd amounts of CSS microsyntax". You could not do everything you needed with XSLT, so you needed to use both XSLT and CSS. Also, coding in XSLT was generally painful, even more so than writing CSS (which I think is another poorly designed language).
It is all well and good to talk about theoretical alternatives that would have been better but we are talking here about a concrete attempt which never worked beyond trivial examples. Why should we keep that alive because of something theoretical which in my opinion never existed?
XSLT is a template language. CSS is a styling language. They have nothing to do with each other. You have data in some XML-based format. You write a template using XSLT to transform that data into HTML. And then you use CSS to make that HTML look pretty. These technologies work very well with each other.
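As a rough sketch of that division of labour (assuming an RSS-like input such as the feed example earlier in the thread; the feed.css name is arbitrary): the XSLT pulls data out of the XML and emits HTML, and a plain CSS file then makes that HTML look the way you want.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- feed.xsl: turn RSS channel/item data into an HTML page; CSS styles it. -->
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html"/>
      <xsl:template match="/rss/channel">
        <html>
          <head>
            <title><xsl:value-of select="title"/></title>
            <link rel="stylesheet" href="feed.css"/>
          </head>
          <body>
            <h1><xsl:value-of select="title"/></h1>
            <ul class="items">
              <xsl:for-each select="item">
                <!-- {link} is an attribute value template filled from each item. -->
                <li><a href="{link}"><xsl:value-of select="title"/></a></li>
              </xsl:for-each>
            </ul>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>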
Completely correct and the operative phrase here is “absurd amounts” which actually captures our entire contemporary computing stack in almost every dimension that matters.
The entire point of markup attributes is to contain rendering hints that themselves aren't rendered to the user as such. Hell, angle-bracket markup itself was introduced to unify and put a limit to syntactic proliferation. But somehow "we" arrived at creating the monstrosity that is CSS and then even to put CSS and JS into inline element content with bogus special escaping and comment parsing rules rather than into attributes and external resources.
The enormous proliferation of syntax and super-complicated layout models doesn't stop markup haters from crying wolf about entities (text macros) representing a security risk in markup, however; go figure.
It's interesting that we don't have a replacement for this use case. For me, XSLT hits a sweet spot where I can send a machine-parsable XML document and a small XSLT sheet from dirt cheap static web hosting (where I cannot perform server-side transforms, or control HTTP headers). This is fairly minimal and avoids needing to keep multiple files in sync.
I could add a polyfill, but that adds multiple MB, making this approach heavyweight.
> The implementation is all agnostic about namespaces. It just expects XSLT elements to have tags that carry the xsl: prefix, but we disregard all namespace declaration for them.
Just for a start. It's a tiny polyfill for a tiny subset of the thing that is XSLT 1.0.
I see, thank you -- so it's not the JavaScript part, it's basically 99% a huge WASM blob. I can understand not wanting to include something like that, yikes.
XSLT was the only convenient way to create a static website without JS. Other ways either require a build step or server-side applications. With XSLT, you could write the data into XML files and the templating into XSL files, and it would just work.
Of course you can achieve similar effects with JS, by downloading data files and rendering them into whatever HTML you want. But that cuts off users without JS enabled.
Not a huge loss, I guess, given the lack of popularity of these technologies. But loss nonetheless. One more step to bloated overengineered web.
Users who disable JS are insane and hypocritical if they don't also disable XSLT, which is even worse. So I wouldn't bend over too far backwards to support insane hypocrites. There aren't enough of them to matter, they enjoy having something to complain about, and they're much louder and more performative than the overwhelming majority of users. Not a huge loss cutting them out at all.
I haven't been too chatty about it but the furor over this being removed has, I suspect, everything to do with there being no real plan to replace what it does. No I don't just mean styling RSS feeds. I mean writing websites as semantic documents!! The whole thing the web is (was) about!
Don’t… you’re forgetting the Christmas of ’02 when cousin Marvin brought up the issue of Tabs vs Spaces!! Uncle Frank still holds a grudge and he’s still not on speaking terms with Adam
> For over a decade, Chrome has supported millions of organizations with more secure browsing – while pioneering a safer, more productive open web for all.
… and …
> Our commitment to Chromium and open philosophy to integration means Chrome works well with other parts of your tech stack, so you can continue building the enterprise ecosystem that works for you.
Per the current version of https://developer.chrome.com/docs/web-platform/deprecating-x..., by August 17, 2027, XSLT support will be removed from Chrome Enterprise. That means even Chrome's enterprise-targeted, non-general-web browser is going to lose support for XSLT.
Most people who use XSLT like the grandparent described were never using it on the client side but on the server side. Nothing Google Chrome does will affect the server side.
To clarify: initially, the first web browser evolved from an SGML-based documentation browser at CERN. This was the first vision of the web: well-structured content pages, connected via hyperlinks (the "hyper" part meaning that links could point beyond the current set of pages). So, something like a global library. Many people are still nostalgic for this past.
Surprisingly, the "hyperlinked documents" structure was universal enough to allow rudimentary interactive web applications like shops or reservation forms. The web became useful to commerce. At first, interactive functionality was achieved by what amounted to hacks: nav blocks repeated at every page, frames and iframes, synchronous form submissions. Of course, web participants pushed for more direct support for application building blocks, which included Javascript, client-side templates, and ultimately Shadow DOM and React.
XSLT is ultimately a client-side template language too (can be used at the server side just as well, of course). However, this is a template language for a previous era: non-interactive web of documents (and it excels at that). It has little use for the current era: web of interactive applications.
What makes XSLT inherently unsuitable for an interactive application in your mind? All it does is transform one XML document into another; there's no earthly reason why you can't ornament that XML output in a way that supports interactive JS-driven features, or use XSLT to build fragments of dynamically created pages that get compiled into the final rendered artifact elsewhere.
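A quick sketch of that point (the product element and the addToCart() function are invented, not taken from any real site): a template can emit HTML that carries perfectly ordinary client-side behaviour, because the output is just HTML.

    <!-- Fragment of an XSLT 1.0 stylesheet whose output is interactive HTML. -->
    <xsl:template match="product">
      <div class="product">
        <h2><xsl:value-of select="name"/></h2>
        <!-- The generated button calls a (hypothetical) addToCart() exactly as
             hand-written markup would; XSLT only produced the HTML. -->
        <button type="button" onclick="addToCart('{@id}')">Add to cart</button>
      </div>
    </xsl:template>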
My only use of XSLT (2000-2003) was to make interactive e-learning applications. I'd have used it in 2014 too, for an interactive "e-brochure", if I could have worked out a cross-browser solution for runtime transformation of XML fragments. (I suspect it was possible then but I couldn't work it out in the time I had for the job...)
If you can use it to generate HTML, you can use it to generate an interactive experience.
XSLT has a life outside the browser and remains valuable where XML is the way data is exchanged. And RSS does not demand XSLT in the browser so far as I know. I think RIP is a bit excessive.
Looks like more of a retro-fun site than a protest. Most serious websites of the 90s had more of a light brownish background with black text, the occasional small image on the side, double borders for table cells, the Times font, horizontal rules, links in bold blue, a sidebar with navigation links, breadcrumbs at the top telling you where you are, maybe also next/prev links at the bottom, and a title banner at the top.
Game sites and other "desperate-for-attention" sites had animated GIFs all over, scrolling or blinking text, and dark backgrounds with bright multi-colored text in different font sizes and faces, plus sound as well, looking pretty chaotic.
Professional and serious websites, yes, but there were plenty of websites on Geocities that looked very much like this. These websites may not have been the majority of the internet, but they weren't rare either.
If anything, this retro site is a bit too modern for having translucent panels, the background not being badly tiled, and text effects being too stylish.
YouTube has pretty much always supported RSS and still does. Google killed their RSS reader, but if they wanted to kill RSS they wouldn't put it in their video platform.
When it comes to killing web technology, Google is mostly killing their own weird APIs that nobody ended up using or pruning away code that almost nobody uses according to their statistics.
Can you please clarify? For me, maintaining my own watch lists, that is, per channel RSS feeds, all neatly organized in my RSS aggregator's folders, is the only way to fly.
Got to love the GitHub issue; it shows exactly the sad state of things. Google owns the internet now and we are all chumps for even thinking there is anything open left.
As a man locked inside of a closet made mostly of XSL, my only regret is that I can't drown it in a bathtub myself.
The XML Priesthood will immediately jump down your throat about "XSL 3 Fixes All Things" or "But You're Not Doing It Correctly", and then point towards a twenty-year-old project that has five different proprietary dependencies, only two of which even still have a public price. "Email Jack for Pricing".
And all this time, the original publishing requirement for these stone age pipelines is completely subsumed by the lightweight markup ecosystem of the last decade, or, barring that, that of TeX. So much complexity for no reason whatsoever, I am watching man-centuries go up in frickin' smoke, to satisfy a clique of semantic academics who think all human thought is in the form of a tree.
The RSS argument makes no sense to me. Viewing styled RSS feeds in your browser is not a conventional way to use RSS, and is not what hardcore RSS users actually want (which is, a unified UI for all their news, without any fancy style, and without any place to even put ads). The styled version of an RSS feed, in the rare circumstances it even exists, is specifically for the non-technical users, who will be perfectly happy with a polyfilled or backend implementation.
If you want to keep XSLT in browsers alive, you should develop an XSLT processor in Rust and either integrate it into Blink, Webkit, Gecko directly, or provide a compatible API to what they use now (libxslt for Blink/Webkit, apparently; Firefox seems to have its own processor).
Website is overly dramatic. Google doesn't hate XSLT; it is simply that no one wants to maintain libxslt and it is full of security issues. Given how rarely it is used, it is just not worth the time + money. If the author wants to raise money to pay a developer willing to maintain libxslt, Google might revise the decision.
> Full of security issues is similarly overly dramatic
It doesn’t seem dramatic at all:
> Finding and exploiting 20-year-old bugs in web browsers
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
> or $0? Probably not. For $40m/year, I bet you could create an entire company
No sane commercial entity will dump even a cent into supporting an unused technology.
You'd have better luck pitching this idea to your senator to set up an agency for dead stuff - it will create tens or hundreds of jobs. And what's $40mm in the big picture?
> Google doesn't hate XSLT; it is simply that no one wants to maintain libxslt and it is full of security issues. Given how rarely it is used, it is just not worth the time + money. If the author wants to raise money to pay a developer willing to maintain libxslt, Google might revise the decision.
Counterpoint: Google hates XML and XSLT. I've been working on a hobby site using XML and XSLT for the last five years. Google refused to crawl and index anything on it. I have a working sitemap, a permissive robots.txt, a googlebot html file proving that I'm the owner of the site, and I've jumped through every hoop I can find, and they still refused to crawl or index anything except a snippet of the main index.xml page, and they won't crawl any links on that.
I switched everything over to a static site generator a few weeks ago, and Google immediately crawled the whole thing and started showing snippets of the entire site in less than a day.
My guess is that their usage stats are skewed because they've designed their entire search apparatus to ignore it.
> it is simply that no one wants to maintain libxslt and it is full of security issues. Given how rarely it is used, it is just not worth the time + money.
As for money: Remind me what was Google's profit last year?
As for usage: XSLT is used on about 10x more sites [1] than Chrome-only non-standards like USB, WebTransport and others that Google has no trouble shoving into the browser
For me the usage argument sounds like an argument to kill the other standards rather than to keep this one.
Browsers should try things. But if after many years there is no adoption they should also retire them. This would be no different if the organization is charity or not.
> For me the usage argument sounds like an argument to kill the other standards rather than to keep this one.
Google themselves have a document on why killing anything in the web platform is problematic: e.g. Chrome stats severely under-report corporate usage. See "Blink principles of web compatibility" https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
It has great examples for when removal didn't break things, and when it did break things etc.
I don't know if anyone pays attention to this document anymore. Someone from Chrome linked to this document when they wanted to remove alert/prompt, and it completely contradicted their narrative.
Their products are built on open source. Android and Chrome come to my mind, but also their core infrastructure, it's all Linux and other FOSS under the hood.
Besides, xkcd #2347 [1] is talking about precisely that situation - there is a shitload of very small FOSS libraries that underpin everything and yet, funding from the big dogs for whom even ten fulltime developer salaries would be a sneeze has historically lacked hard.
The thing is, xslt isn't underpinning much of anything; that is why Google is removing it instead of fixing it.
Google does contribute to software that it uses. When I say Google is not a charity, I mean why would they continue to use a library that is not useful to them, just so they can have an excuse to contribute to it? It makes very little sense.
> The thing is, xslt isn't underpinning much of anything
An awful lot of stuff depends on xslt under the hood. Web frontend, maybe not much any more, that ship has long since sailed. But anything Java? Anything XML-SOAP? That kind of stuff breathes XML and XSLT. And, at least MS Office's new-generation file formats are XML... and I'm pretty sure OpenOffice is just the same.
Let me rephrase that: client-side XSLT in the browser isn't underpinning much of anything. I agree there are more uses in the enterprise world, although I think most of your examples are more XML than XSLT (people really shouldn't conflate the two; XML underpins half the world). I've never heard of anyone using XSLT on a Microsoft Office docx file.
I'd also assume the java world is using xalan-j or saxon, not libxslt.
> I mean why would they continue to use a library that is not useful to them, just so they can have an excuse to contribute to it? It makes very little sense.
They took upon themselves the role of benevolent stewards of the web. According to their own principles they should exercise extreme care when adding or removing features to the web.
However, since they dominate the browser market, and have completely subsumed all web-related committees, they have turned into arrogant uncaring dictators.
> However, since they dominate the browser market, and have completely subsumed all web-related committees, they have turned into arrogant uncaring dictators.
Apple and Firefox agree with them. They did not do this unilaterally. By some accounts it was actually Firefox originally pushing for this.
While others may agree with them [1], Google are the ones immediately springing into action [2]. They only started collecting feedback on which sites may break after they already pushed "Intention to remove" and prepared a PR to remove it from Chromium.
[2] Same as with alert/prompt. While all browsers would want to remove them, Chrome not only immediately decided to remove them with very short notice, but literally refused to even engage with people pointing out issues until a very large public outcry: https://gomakethings.com/google-vs.-the-web/#the-chrome-team...
There's a difference between "we agree on principle" and "we don't care, remove/ship/change YOLO"
To be honest, there are two ways to solve the problem of xkcd 2347: either put effort into the very small library, or just stop depending on it. Both solutions are fine to me, and Google apparently just chose the latter one here.
If not depending on a library is an option, then you don't really have an xkcd 2347 problem. The entire point of that comic is that some undermaintained dependencies are critical, without reasonable alternatives.
It is misleading insofar as XSLT is an independent standard [1] and isn't owned by Google, so they cannot "kill it"; rather, they'd have to ask the W3C to mark it as deprecated.
What they can do is remove support for XSLT in Chrome and thus basically kill XSLT for websites. Which until now I didn't even know was supported and used.
XSLT can be used in many other areas as well, e.g. for XSL-FO [2]
I don't think XSLT was invented for the purpose of rendering XML into HTML in the first place. Perhaps it never should have been introduced in browsers to begin with?
XSLT was invented to transform one XML document to another XML document.
Browser can render XHTML which is also a valid XML.
So it's pretty natural to use XSLT to convert XML into XHTML which is rendered by browser. Of course you can do it on the server side, but client side support enables some interesting use-cases.
There is absolutely nothing to prevent anyone from generating arbitrary DOM content from XML using JS; indeed, there's nothing stopping them from creating a complete XSLT implementation. There's just no need to have it in the core of the browser.
You don’t need to generate anything with JavaScript, aside from one call to build an entire DOM object from your XML document. Boom, whole thing’s a DOM.
I guess the fact that it’s obscure knowledge that browsers have great, fast tools for working directly with XML is why we’re losing nice things and will soon be stuck in a nothing-but-JavaScript land of shit.
Lots of protocols are based on XML and browsers are (though, increasingly, “were”) very capable of handling them, with little more than a bridge on the server to overcome their inability to do TCP sockets. Super cool capability, with really good performance because all the important stuff’s in fast and efficient languages rather than JS.
Why not just write an XSLT implementation in JS/WASM, or compile the existing one to WASM? This is the same approach that Firefox uses for PDFs and Ruffle for Flash. That way it is still supported by the browser and sandboxed.
This already exists, and I agree that it's the best solution here, but for some reason this was rejected by the Chrome developers. I discussed this solution a little more elsewhere in the thread [0].
Did a JS polyfill ever go anywhere? There is a comment on https://groups.google.com/a/chromium.org/g/blink-dev/c/zIg2K... which suggests that it might be possible, but a lot has changed. I suspect any effort died with continued availability after the first attempt to kill XSLT.
XSLT was once described to me as "Pain wrapped in Hate", and I fully agree. I'm truly shocked that there is ANY opposition to its removal and retirement.
This is unfortunate and sad but understandable. Slightly off-topic: a friend dared me to look for a sandbox CSP bypass and I discovered one using XSLT. I reported it to Mozilla few months ago, CVE-2025-8032. https://www.mozilla.org/en-US/security/advisories/mfsa2025-5...
My first graduate job at a large British telco involved a lot of XML...
- WSDL files that were used to describe Enterprise services on a bus. These were then stored and shared in the most convoluted way in a Sharepoint page <shudders>
- XSD definitions of our custom XML responses to be validated <grimace>
- XSLTs to allow us to manipulate and display XML from other services, just so it would display properly on Oracle Siebel CRM <heavy sweats>
This is in no way a counterpoint. You don't get to be a billion-dollar company that can fix XSLT and ignore other libraries without security issues and tell us it's broken.
Fuck Google you tyrants, all the dissenting opinions in this thread about XSLT are clearly Google employees.
If they have security in mind, they should intend to deprecate and remove HTML. The benefits of keeping it are slowly disappearing as AI content on the web is taking over, and HTML contains far more quirks than XSLT, and let's not talk about aging C codebases about HTML...
I think the problem with XSLT is that it's only a clear win to represent the transform in XML to the extent that it is declarative. But, as transformations get more complex, you are going to need functions/variables/types/loops, etc. These got better support in XSLT 2 and 3, but it's telling that many xslt processors stuck with 1.0 (including libxslt and the Microsoft processors). I think most people realized that once they need a complex, procedural transformation, they'd prefer using a traditional language and a good XML library to do it.
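For anyone who hasn't felt that pain first-hand, a small sketch of the sort of thing that pushes people toward a "traditional language plus XML library" (the template name and output elements are arbitrary): XSLT 1.0 has no tokenize() and no loops with mutable state, so even splitting a comma-separated string means writing a recursive named template.

    <!-- Split "a,b,c" into <li> elements: recursion stands in for a loop. -->
    <xsl:template name="split">
      <xsl:param name="text"/>
      <xsl:choose>
        <xsl:when test="contains($text, ',')">
          <li><xsl:value-of select="substring-before($text, ',')"/></li>
          <!-- Recurse on the remainder of the string. -->
          <xsl:call-template name="split">
            <xsl:with-param name="text" select="substring-after($text, ',')"/>
          </xsl:call-template>
        </xsl:when>
        <xsl:otherwise>
          <li><xsl:value-of select="$text"/></li>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:template>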
I don't like seeing any backward compatibility loss on the web though, so I do wish browsers would reconsider and use a js-based compatibility shim (as other comments have mentioned).
The google graveyard is for products Google has made. It's not for features that were unshipped. XSLT will not enter the Google graveyard for that reason.
>We must conclude Google hates XML & RSS!
Google Reader was shut down due to declining usage and Google's unwillingness to continue investing resources into the product. It's not that Google hates XML and RSS. It's that end users and developers don't use XSLT and RSS enough to warrant investing in them.
>by killing [RSS] Google can control the media
The vast majority of people in the world do not get their news via RSS. It never would have taken over the media complex. There are other surfaces for news, like X, that Google does not control; Google is not the only place where news surfaces.
> Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
It is quite a reach to say that Google removing XSLT will give them control over government legislation. They are completely unrelated.
>How much did Google pay for this support?
Google is not paying for support. These browsers essentially have revenue-sharing agreements for the traffic they send to Google; the payments are for that traffic.
The page styling harkens back to some EARLY early personal amateur niche sites. It reminds me of Time Cube <https://web.archive.org/web/20150506055228/http://www.timecu...> or Neocities pages, even TempleOS in its earlier days.
It's really taking me back, I'm actually getting a little emotional...
Some people used XSLT to style their RSS feeds when displaying them in the browser. An alternative is to use CSS to style the feeds. Personally I don't see why I would want styled feeds.
I have a little bit of skepticism about the move by Google (and you should usually be very skeptical, any time Web standards or Web "security" are talked about, lately), but...
The gaudy retro amateur '95 design of this page might suggest the idea "anyone only cares about this for strange nostalgia reasons".
Content-wise, I think this argument is missing a key piece:
> Why does Google hate XML?
> RSS is used to syndicate NEWS and by killing it Google can control the media. XSLT is used worldwide by [multiple government sites](https://github.com/whatwg/html/issues/11582). Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
Google wanting RSS/Atom dead, presumably for control/profit reasons, is very old news. And it's old news that Big Tech eventually started playing ball with US-style lobbying (to influence legislation) after resisting for a long time.
But what does the writer think is Google's motivation for temporarily breaking access to US Congress legislative texts and misc. other gov't sites in this way (as alleged by that `whatwg` issues link)? What do they want, and how does this move advance that?
We can imagine conspiracy theories, including some that would be right at home on a retro site with animated GIFs and a request to sign their guestbook, but the author should really spell out what they are asserting.
(IIRC her salary increased something like tenfold over the past 15 years or so)
Edit: It has jumped from $490k[1] to $6.25M[2] from 2009 to 2024.
Edit 2: by looking the figures up, I learned that she's gone at last, good riddance (though I highly doubt her successor is going to take a 12-fold pay cut)
It's hard enough to get all browsers to implement standards the same way, let alone to stay on top of removing things that are already there and might later turn out to be useful.
It's not dead yet; a new maintainer showed up. But Google Chrome decided to ditch it, which is fine by me. It was a clusterfuck, similar to libxml2, but even worse.
Poe's Law fully in effect here. Given the 90s-era eye-gouge layout, I can't tell if the author endorses continued support of XSLT or is doing a "Modest Proposal"-style satire by conflating those who support continued native implementation of XSLT with those who pine for the days when most of the web looked like this.
This is about forcing everyone into JSON. The amount of “just take Google’s word for it” in this thread is incredibly sad. We have truly lost our way as a tech-embracing society and eschewed reason.
There is a reason the lead Google engineer's initials are “MF”.
I know that XSLT can be implemented in JS (I have used Saxon-JS, it's good!), but the loss of the xml-stylesheet processing instruction will be a shame.
There is nothing like it in the modern web stack, such a pity.
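Everything the processing instruction does can, in principle, be recreated by hand with APIs that still ship today. A rough sketch, with the caveats that the regex-based href extraction is simplified and that the feed and stylesheet must be fetchable from the page (same-origin or CORS-enabled):

    // Emulate <?xml-stylesheet type="text/xsl" href="..."?> by hand:
    // find the PI in a fetched XML document, load the stylesheet, transform.
    async function applyXmlStylesheetPI(xmlUrl) {
      const fetchXml = async (url) =>
        new DOMParser().parseFromString(await (await fetch(url)).text(), "application/xml");
      const xml = await fetchXml(xmlUrl);

      const pi = Array.from(xml.childNodes).find(
        (n) => n.nodeType === Node.PROCESSING_INSTRUCTION_NODE &&
               n.target === "xml-stylesheet");
      const href = pi?.data.match(/href="([^"]+)"/)?.[1];
      if (!href) return null; // no stylesheet PI present

      const xsl = await fetchXml(new URL(href, new URL(xmlUrl, location.href)));
      const proc = new XSLTProcessor();
      proc.importStylesheet(xsl);
      return proc.transformToDocument(xml);
    }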
Given that XSLT transforms XML into HTML, why has no one simply built a server side XSLT system? So these existing sites that use XSLT can just adopt that, and not need to rely on browser support.
I remember Gentoo Linux had all its official documentation in a system just like that, maybe 15-20 years ago. It was written and stored as XML, XSLT-processed and rendered into HTML on the webservers.
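That kind of setup is still easy to reproduce. Here is a minimal Node sketch, assuming the xsltproc CLI from libxslt is installed on the server; the stylesheet and document file names are placeholders:

    // Pre-render XML to HTML on the server so the browser never needs XSLT.
    import { execFile } from "node:child_process";
    import { createServer } from "node:http";

    function renderXslt(xslPath, xmlPath) {
      return new Promise((resolve, reject) => {
        // xsltproc <stylesheet> <document> writes the result to stdout.
        execFile("xsltproc", [xslPath, xmlPath], (err, stdout) =>
          err ? reject(err) : resolve(stdout));
      });
    }

    createServer(async (req, res) => {
      try {
        const html = await renderXslt("docs.xsl", "docs.xml"); // placeholder files
        res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
        res.end(html);
      } catch (e) {
        res.writeHead(500);
        res.end("transform failed");
      }
    }).listen(8080);

Caching the output, or just pre-rendering at build time, would be the obvious next step for anything with real traffic.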
I want to use it on an RSS feed: to make it sensible when a new user clicks an RSS link.
I specifically want it to be served as XML so it can still be an RSS feed. I don't even need the HTML to look that great; I have the actual website for that.
Server-side XSLT tools have existed for 25 years or so. The people complaining about this want existing websites using XSLT on the client to continue to work without changes.
It's actually possible to support it by re-implementing it in JS, or by compiling the existing implementation to WASM, and running it on the client side. There are extensions that support PDF (pdf.js), Flash (Ruffle), and MHT (UnMHT), so it should be possible to do the same for XSLT. The real question is: who wants to? Does XSLT have a large user base like PDF, Flash, or MHT?
Also: "the needs of users and authors (i.e. developers) should be treated as higher priority than those of implementors (i.e. browser vendors), yet the higher priority constituencies are at the mercy of the lower priority ones": https://dev.to/richharris/stay-alert-d
Many years ago, I was leading a team that implemented a hyperfast XML, XSLT, XPath parser/processor from the ground up in C/C++. This was for a customer project. It also pulled some pretty neat optimizations, like binary XSLT compilation, both in-mem and FS caching, and threading. On the server-side, you could often skip the template file loading and parsing stages, parallelize processing, and do live, streaming generation. There was also a dynamic plugin extension system and a push/pull event model. The benchmarks were so much better than what was out there. Plus, it was embeddable in both server and client app code.
Would have been great if it had been open-sourced, but they paid for all the development and owned the codebase. They wanted to use it to dynamically generate content for every page for every unique device and client that hit their server. They had the infrastructure to do that for millions of users. The processing could be done on the server for plain web browsers or embedded inside a client binary app, so live rendering to native could be done on-device.
Back then, it was trivial to generate XML on-the-fly from a SQL-based database, then send that back, or render it to XHTML or any other custom presentation format via XSLT. Through XSD schema, the format was self-documenting and could be validated. XSLT also helped push the standardizing on XHTML and harness the chaos of mis-matched HTML versions in each browser. It was also a great way to inject semantic web tags into the output.
But I always thought it got dragged down with the overloaded weight of SOAP. Once REST and Node showed up, everyone headed for new pastures. Then JS in browsers begat SPAs, so rendering could be done in the front-end. Schema validation moved to ad-hoc tools like Swagger/OpenAPI. Sadly, we don't really have a semantic web alternative now and have to rely on best guesses via LLMs.
For a brief moment, it looked like the dream of a hyper-linked, interconnected, end-to-end structured, realtime semantic web might be realizable. Aaand, then it all went poof.
TL;DR: The XML/XSLT stack nailed a lot of the requirements. It just got too heavy and lost out to lighter-weight options.
To make the web safer, they will replace simple static web pages with remote code execution on the user's machine. Yet another “fuck you” to people who don't want to shove JavaScript in everything. God forbid I serve a simple static site to people. Nonono. XSLT is fantastic for people who actually want to write XML documents like the good old days, or add styling to Atom feeds.
Edit: and for a slightly calmer response: Google has like, a bajillion dollars. They could address any security issues with XSLT by putting a few guys on making a Rust port and have it out by next week. Then they could update it to support the modern version in two weeks if it being out of date is a concern. RSS feeds need XSLT to display properly, they are a cornerstone of the independent web, yet Google simply does not care.
It's truly troubling to see a trillion dollar corporation claim that the reason for removing a web browser feature that has existed since the 90s is because the library powering it was unmaintained for 6 months, and has security issues. The same library that has been maintained by a single developer for years, without any corporate support, while corporations reaped the benefits of their work.
Say what you will about how this is technically allowed in open source, it is nothing short of morally despicable. A real https://xkcd.com/2347/ situation.
It would cost Google practically nothing to step up and fix all security issues, and continue maintenance if they wanted to. To say nothing of simply supporting the original maintainer financially.
But IMO the more important topic within this whole saga is that libxml2 maintenance will also end this year. Will we also see support for XML removed?
> Say what you will about how this is technically allowed in open source, it is nothing short of morally despicable. A real https://xkcd.com/2347/ situation.
> But IMO the more important topic within this whole saga is that libxml2 maintenance will also end this year. Will we also see support for XML removed?
No, because XML has meaningful usage on the web. The situations are very different.
> Similar to the severe security issues in libxslt, severe security issues were recently reported against libxml2 which is used in Chromium for parsing, serialization and testing the well-formedness of XML. To address future security issues with XML parsing In Chromium we plan to phase out the usage of libxml2 and replace XML parsing with a memory-safe XML parsing library written in Rust
Perhaps there are some Rust gurus out there that can deliver a XSLT crate in a similar fashion, which other folks can then integrate?
The problem seems to be that the current libxslt library is buggy due to being written in C, a memory-unsafe language (use-after-free bugs, etc.).
[BTW, David Hanson's old book "C Interfaces and Implementations" demonstrated how to code in C in a way that avoids use-after-free: pass pointers to pointers instead of plain pointers, and set them to zero upon freeing the memory block, e.g. with a FREE macro that both releases the block and nulls the caller's pointer.]
Just fork off an OpenChromium branch that adds the new implementation. Whoever wants to remain compatible with open Web W3C recommendations can develop that branch.
Agreed. Because this decision has nothing to do with safety or low usage, like they claim. It's just another example of a corporation abusing their dominance to shape the web according to their interests.
For me, it happened the moment XMLHttpRequest became the only working common denominator among the few "new" techniques browsers offered:
- since an iframe sat on top of everything, you couldn't just load content into a target the way you could in Netscape,
- and you had to use JS afterwards anyway to move it out of the foreground.
Because I wasted my time finding working ways to get results by scripting, it left me no time to think about the problem in any other way (for example, to try out the next few client-side features I saw coming soon after, which I already knew from eXist-db).
It took me some time, much later, to learn about a few incredible things that, had they worked, would have made my job almost trivial - if only the few things filed as bugs had been fixed back then.
Without that, the same thing kept happening: if you wanted the result, you coded it yourself, around (or in spite of) a few bugs that turned simple things into hard corner cases with interoperability problems that couldn't be solved.
Since then I've understood that with JavaScript it's simply easier to keep fixing things ad hoc, without worrying too much about standards or implementations, than to keep asking for a few key bugs to be fixed for more than 20 years and never see it happen.
The legacy is that we can no longer reach the point where simple things just interoperate (is that old school now?). A generation later, developers who aren't even aware of why have such an imperative, micromanaging mindset that they cannot imagine not re-implementing the same thing over and over, just because somewhere else, after a long road, it had already been abstracted once - but was never once implemented to work consistently, as intended, across browsers.
From that point of view it's easy to stop worrying about standards, or to abolish them: you can't do much about other implementations or their bugs, but you can do whatever you want with your own code (as long as nobody reminds you - and will that last when other things change?).
That's sad, as I see it: JavaScript document programmers keep repeating, and will keep repeating, the same work, unaware of the reason - a few bugs here and there, unfixed, or never fixed the same way, for 20 years.
But how "random" were all the things that led to this point, where with JavaScript everything is possible and everything else is redundant? (Can only a hammer do the job?)
Then look at an example: https://news.ycombinator.com/item?id=45183624 - which part there looks like the simplest abstract form, and which looks redundant?
Wow you got negged so hard, likely by people that have never really written XSLT code.
I have, and I've always hated it. To this day I will never touch an IBM DataPower appliance, even though I'm more than capable, because of XSLT.
They (IBM) even tried to make it more appealing by allowing Javascript to run on DataPower instead of XSLT to process XML documents.
It's a crap language designed for XML (which is too verbose) and there are way better alternatives.
JavaScript and JSON won because of their simplicity. The JavaScript ecosystem, however (Node.js, npm, yarn, etc.), is what takes away from an otherwise excellent programming language.
Please kill it, and then let's sit down at a table like adults and decide what else should be killed. Maybe specify a minimum subset of modern features a browser must support; please, let's do it, it could reignite browser competition. Projects like Ladybird should not have to implement obscure backwards-compatibility layout specs... What about the non-modern web sites? The browser will just ask to download an extra WASM module for opening something like https://www.spacejam.com/1996/