Code splitting can be a nice optimization but it can also be a lot of effort for little gain as it is in our case. It is an optimization for the few times a user hits our app with a stale or empty cache. And we have no mobile users; this is an enterprise analytics app.
We do not grow organically by people stumbling on our app and thinking "wow that was fast". We go through months of enterprise sales process to ink a deal, then onboard maybe 20 key users at the company.
To put the effort into code splitting would be purely an exercise in keeping up with the new hotness. That's not to say we don't keep a close eye on the package size, just that it's not a great optimization for a regular user's experience in our case.
Also serving all assets from the same domain saved us some time in domain resolution.
The last part, absolutely yes. CDNs are obsolete, although they'll jump through all sorts of hoops to convince you that's not the case.
As long as your service uses HTTP/2, it's far more efficient from a DNS and multiplexing/TCP/TLS handshake standpoint to serve from your own domain. And it's better for security most of the time, since hardly anyone uses CSP and hash signatures for their third-party scripts.
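(Hardly anyone does, but for the record the moving parts are a Content-Security-Policy header that restricts script origins, plus Subresource Integrity hashes on the script tags themselves. A minimal, hypothetical Node/Express sketch of the header half; the domain is a placeholder:)

    // Hypothetical Express sketch: restrict where scripts may load from.
    // Pinning third-party scripts further requires integrity hashes on the
    // script tags themselves (SRI).
    const express = require('express');
    const app = express();

    app.use((req, res, next) => {
      res.setHeader(
        'Content-Security-Policy',
        "script-src 'self' https://widgets.example.com"
      );
      next();
    });

    app.listen(3000);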
The original sell of CDNs was that everybody would have the same libraries cached. With the massive proliferation of JS you would have to have a 100 gig cache for that to be remotely true.
A couple years back I moved a C# app to .NET Core for the HTTP/2 support. I tried removing the ~4 external CDN dependencies just to see what happened. Load speed improved around 30%, because there were no additional DNS lookups and the TCP window issues were worked around by multiplexing.
An aside: try not to use multiple subdomains. They trigger extra DNS lookups and don't work as well with CORS. It's easy to accidentally trigger CORS preflights and a bunch of meaningless round trips by splitting content across subdomains.
I think you are conflating general Content Delivery Networks with "shared JavaScript library repositories that happen to use a CDN". While the "it's a shared repo of common JS files so it's already cached" idea never really worked (you are right about the added cost of DNS + TCP/TLS), general CDNs absolutely provide performance benefits by delivering your static (and optionally dynamic) content from edge nodes that are geographically much closer to the visitor. Usually these CDNs front the entire site origin, so you don't have the extra DNS overhead of subdomains like shared JS repos do.
(I work with many IR Top 100 retailers, and I’ve helped to build the dashboards comparing edge vs origin. It’s valuable even for sites where the majority of the visitors are in the US, and especially so if you have a substantial international audience)
When they front the origin there's definitely an advantage, I agree.
But that implies:
- You allow a CDN to host your initial asset, which is potentially a security risk.
- You only use that one CDN for most/all non-dynamic content.

When latency becomes that important I would rather run my own AS. Giving a CDN the origin and your first content load is effectively handing your entire site's security to a third party.
At that scale you could easily and cheaply run multi-homing on your own AS in maybe 10 colos across the world. Maybe 100k a year to eliminate third-party risk. Maybe worth it? I think so.
Looking at our metrics, we have over a 95% cache hit ratio using CloudFront. We use it to store large JS libraries (think Excel, table plugins, HTML editors) and it works great. It keeps the pressure off our origin app server, and we version each release of the vendor code, so it rarely changes. It also helps that our users share the same office space, so it's always cached for the 20+ users in the office. "CDNs are obsolete" is not a simple true/false statement; it's more complicated than that.
HTTP/2 is not always faster; it depends on whether your connection has any loss (wifi, spotty 4G).
Cache lookups are cheap. If these users are all in the same office you would be much better off just serving everything from a nearby datacentre. By using content hashes in the file names you can prevent revalidation lookups completely and get the same hit rate as the CDN without using one at all.
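Concretely, that's content-hashed filenames plus a long immutable cache lifetime; a rough Express sketch (paths and options are illustrative):

    // Hypothetical sketch: files like /assets/vendor.3f2a9c.js get a new name
    // whenever their contents change, so the browser can cache them "forever"
    // and never revalidate.
    const express = require('express');
    const app = express();

    app.use('/assets', express.static('dist', {
      maxAge: '1y',
      immutable: true,   // -> Cache-Control: public, max-age=31536000, immutable
    }));

    app.listen(3000);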
> Code splitting can be a nice optimization but it can also be a lot of effort for little gain as it is in our case.
We came to the same conclusion when we investigated, but because of the structure of our site: we had just a handful of layout primitives reused across the site, so page splitting had no benefit because every page used the same code anyway.
Still an optimisation to look into if you can benefit from it!
Ugh.. does anybody else have this feeling that no matter how fast JavaScript gets, average web app performance will not change at all? Kind of like with risk compensation principle - the safer your gear, the more risks you take on, we’re in the same spot with web/electron apps.
JavaScript VMs got so much faster than 10 years ago, and yet all the websites are much worse: memory and CPU hogs.
This is not something improving the vm can fix. There just is no competition so that customers could send a feedback signal saying that performance is unacceptable.
As VMs get faster, people will stuff more JS in pages, just because it's doable.
Some things were not even possible and were "unlocked" by browser optimizations.
For example: writing games or complex animations with JS and the DOM was nearly impossible (that gap was filled with flash). As browsers got faster, the need for flash just went away.
Also, the more APIs the browsers ship, the more it is possible to do with JS, and so pages ship more JS to use these new features. [1]
For example: after browser notifications became broadly available, almost _all_ pages now ship a snippet of JS code to annoy the user with notifications. I'm sure such fads blow up the average size of bundled JS across all web pages.
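For the record, the snippet in question is usually some variation of this (illustrative only):

    // Ask for notification permission as soon as the page loads
    // (please don't actually do this unprompted).
    if ('Notification' in window && Notification.permission === 'default') {
      Notification.requestPermission().then((result) => {
        if (result === 'granted') {
          new Notification('Thanks for enabling notifications!');
        }
      });
    }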
Yep. Unless a company actively engages clients to get product feedback, it's unlikely that the end users even have the clout to complain about app performance until a vendor improves it.
There are times that I perceive the community trying to convince itself that web devs can have their cake and eat it too. That it's not just possible but easier to build performant, accessible and maintainable apps in React/Angular/Vue, when that's just not a universal truth. Sure, those tools may make certain aspects of development easier through the abstractions they leverage, but those also come with a cost (e.g. breaking API changes seem to be in vogue these days).
Ultimately, this is part and parcel of the JS community. Aspects of the ecosystem feel so fragile (NPM, framework/lib churn, etc.) that it's easy to be cynical about web app performance.
Something I've wondered is whether consumers have forgotten what fast, responsive UI feels like, or whether, because so many people used bargain desktops/laptops, they never knew. Thus, slow, visually cumbersome software is the norm for them...
Years ago, when the debate was natively compiled apps vs Java/C# apps, hardware was still slow enough that you could reasonably justify using C++ or C or something instead. Then hardware got "fast enough" that, generally, it didn't matter.
We are fast approaching a time when hardware will not be able to save us so I believe we'll eventually see devs have to slow down and optimize better.
> Ugh.. does anybody else have this feeling that no matter how fast JavaScript gets, average web app performance will not change at all? Kind of like with risk compensation principle - the safer your gear, the more risks you take on, we’re in the same spot with web/electron apps.
> This is not something improving the vm can fix. There just is no competition so that customers could send a feedback signal saying that performance is unacceptable.
I think that's part of the reason Google made AMP: they don't allow arbitrary JavaScript scripts because it's not practical to get all websites and all developers to optimise JavaScript usage. Likewise, the amount of CSS code allowed is strictly limited.
Whenever I test website optimisation (basic websites, no fancy-shmancy stuff, not web-apps) the slowest parts, least accessible too, are advertising and tracking, and social.
Google tells me the code to add to my page (for G+, advertising, analytics), then things like Google Lighthouse tell me don't do X, Y, Z that are all done by the code Google told me to use.
Minimise request size, leverage browser caching, defer parsing, ... they don't even minify? At least they enable gzip, Amazon ads don't even do that.
Load show_ads.js asynchronously ... well why not tell me to do that up front rather than after the fact in PageSpeed?
> Load show_ads.js asynchronously ... well why not tell me to do that up front rather than after the fact in PageSpeed?
The most likely answer is that Google is a huge company and not every team has the same priorities. It's not realistic to expect they're going to optimise the loading speed of literally everything they release.
Web best-practice checks will always have some degree of false positives as well, because websites are so complex.
I wrote my own Chrome extension that checks websites for best practices (https://www.checkbot.io) and had to weigh up how generally useful a test was compared to its typical false-positive rate. The website for the product does pass the vast majority of the tests, though I've only got a single website to manage compared to Google.
"This page has 25 external Javascript scripts. Try combining them into one."
"This page has 3 external stylesheets. Try combining them into one."
This is obsolete with HTTP/2 because files can be downloaded in parallel. Combining the files will also increase the weight of pages that don't need every file.
>It's not realistic to expect they're going to optimise the loading speed of literally everything they release. //
How about just the things that get duplicated on more than 10 million websites?
>By the way, isn't some of this advice outdated? //
Yes, the link was just supposed to be broadly indicative. Google signposts Lighthouse as the primary tool now rather than PageSpeed, I think, but you can't link to that on the web. [And even Lighthouse has false positives, but that's not the issue here.]
What do you mean there's no competition? This feels like such a weird statement when there's competition in just about every market segment on the web. If you mean there's no competition where the differentiating feature between competitors is performance then well sure, that's a lot rarer.
For user-facing applications performance only really matters in human time, and is pretty far down on the list of how people choose software over things like features, price, ease of use, etc.. Until performance becomes a big problem because the app/site becomes unusable it's basically no problem.
Hackers might not like it but we're such a weird market to sell to. Not necessarily that performance is a weird preference, but that by and large hackers are perfectly fine with woefully inefficient feature-packed apps for their "primary" apps like their IDE, but then want super lightweight skeleton-featured experiences for everything secondary.
Performance is crucially important for a lot of applications. It profoundly affects the usability of a site if performance is too slow, and many studies over the years suggest direct losses on the bottom line as a result.
It may be a controversial opinion, but I think part of the problem today is that young web developers look to people who work at high-profile places like Facebook and Google as role models and for examples of best practice. And yet, a lot of Google's and Facebook's own web properties are... less than exemplary... in terms of performance, usability, design, and other factors that are important in most situations, and have been getting steadily worse rather than better over time. I humbly submit that the reasons for the runaway success of these giants have really very little to do with the quality of their sites any more, and that up to a point they can get away with things because of their dominant positions and lock-in effects that simply wouldn't be acceptable for most of the rest of us.
I think a lot of this is technical debt. They built the feature/page/whatever code and it works "good enough".
Now a future programmer three years later tells the business manager that they can do this much better now for $x. It will improve nothing as far as the user is concerned. Maybe the page will load faster, but the user may not even notice. Maybe they will save some money on hosting and bandwidth costs.
In most cases, the business person will most likely tell the programmer to keep working on the new feature z2021 project and leave the old code alone. That is, until there is a security problem or the old code affects the z2021 project.
> For user-facing applications performance only really matters in human time, and is pretty far down on the list of how people choose software over things like features, price, ease of use, etc..
There was a study showing a correlation between revenue drop and the milliseconds a page takes to load. It's not only hackers who respond to performance.
As for competition - due to network effects there are very few competitors to things like Facebook, Amazon and other big tech.
If these were federated services participating in communication built around some gigantic independent social graph - that would enable more competition in terms of UI performance, features, etc. Not going to happen anytime soon though.
I feel like this is similar to the induced demand argument: "If you provide more supply then people will just (mis)use it and you won't have excess anymore so it's not really worth it"
And the implicit response is that people using available resources for whatever they want is fundamentally a good thing.
In the context of computers, though, I want to decide what to spend the RAM on, it’s not great when random third parties decide to take advantage of my RAM just because it’s there.
If you look at why the bundles are so big, the frameworks are so large etc., you’ll realise it all comes down to fighting browser deficiencies:
- no declarative APIs for DOM updates
- no good APIs to do batch updates
- no native DOM diffing (so decisions on what to re-render have to be done in userland)
- no DOM lifecycle methods and hooks (you want to animate something before it’s removed from DOM, good luck)
- no built-in message queueing
- no built-in push and pull database/key-value store with sensible APIs (something akin to Datascript)
- no built-in observables and/or streaming
- no standard library to speak of
- no complex multi-layered layout which matters for things like animations (when animating a div doesn’t screw up the whole layout for the entire app/page)
etc. etc. etc.
As a result every single page/app has to carry around the whole world just to be barely usable.
> As a result every single page/app has to carry around the whole world just to be barely usable.
So pages before this javascript bloat became commonly accepted were unusable? Maybe at some point web developers have to accept that the web was simply not built for the things they're trying to do with it, and that comes with a cost. Maybe they should evaluate whether they _really_ need animations on everything, whether everything _has_ to be a SPA made in [current popular framework]. I'm not even saying these things are bad, they absolutely do have value, but that value has a tradeoff, and usually that tradeoff is placed on your customers.
> Maybe at some point web developers have to accept that the web was simply not built for the things they're trying to do with it, and that comes with a cost.
So what's your proposal? To have one browser for "web documents" and another just for those things built by web developers that you dismiss with straw men, yet still acknowledge as having value?
If they have value, but they incur a cost, shouldn't we be looking at why we have that cost and how to drive it down? That's exactly what the post you're replying to is trying to convey.
Yes, not everything has to be animated or an SPA, because some things are good enough as just simple "web documents". But others gain in usefulness, usability and cognition by being augmented (e.g. data visualisation, business apps, games, et al). Not every web content is created for the same purpose. We still end up with the same crippled DOM/JavaScript combo. That should be the focus of the conversation.
The parent’s point is that, if all these things were browser JS runtime features, there’d be no tradeoff to be made. They’d be “free.”
If every JS page in the world today includes the same line of code, isn’t it obviously the fault of browser makers for not making that line of code part of the JS prelude and thereby making it “free”?
That's fine, and if they do end up being part of the JS runtime, then great! But right now they don't, so by including those you are actively having to pay the cost of doing so. My point of contention with the parent post is that these features are requirements for a webpage "to be barely useable". You can absolutely make great websites and webpages without these.
An OS? An exaggeration maybe, but only a slight one. The browser is pretty much its own environment already. It's been the case for decades. It can display text, process a variety of media, and run programs (written in JavaScript). The main cause of frustration at this point is that, compared to other popular programmable environments (C, PHP, Python, Node.js, etc.), where some real focus is put into evolving the language along with a variety of primary development tools, the people in charge of the browser's ecosystem seem to still be coming to terms with the programmable part of its identity.
All the dilly-dallying results in community efforts that pile on top of each other to create all this bloat that is carried from one tool to the next framework.
We really do need animations on everything, SPAs, etc. Having a less pretty looking site makes us appear much less trustworthy to non-technical customers, who don't care that their browser was not originally created for our online storefront. I imagine that's the same for pretty much every other e-commerce site.
> So pages before this javascript bloat became commonly accepted were unusable?
They weren't unusable. They, too, were barely usable in any scenario outside of a static HTML page with static images. There's a reason why jQuery was (and probably still is) the most popular JavaScript library. You don't have to go too far to see what people were doing before "js bloat", just look at ExtJS [1]. They would have loved to have the "js bloat" 10 years ago.
I think one major issue with this is that ES basically can't break backwards compatibility, ever. Observables are a good example, actually. If we look at RxJS, which is sort of the inspiration behind observables I guess, we'll see that it is now at its 6th major version. The migration from 4th to 5th made a bunch of renamings and other breaking changes, which just wouldn't work in ES. The migration to 6th includes an additional compatibility layer, which again would be a permanent piece of bloat if it were in the ES standard.
And all this while some other HNers are saying that the browsers are too complicated and it's so sad that there won't ever be completely new independent browsers... :)
You are correct. I was thinking of Object.observe which was preached as the next big thing in JS only to finally be pulled from Chrome (around version 50 IIRC).
That said, does a native observable actually buy that much performance-wise? If not, then perhaps an add-on library is better. Once something makes it into the JS standard, it is there forever. That's a big risk when decent libraries already exist.
I agree that the browser has many deficiencies for the modern front end developer, but OTOH there is no real need to make huge bundles.
As an example, look at Svelte. It compiles down to super efficient imperative code without sacrificing the dev experience. A hello world weighs like 5 kB gzipped, I believe.
Actually, the minimal Hello world example in Svelte is 2.7 kB even before gzip. It's really awesome and I already used it in one production app, the results were pretty amazing performance wise.
Besides what snek mentioned, the whole CSS Houdini effort is about providing better, finer-grained tools to do/control partial updates.
That of course doesn't directly help with bundle sizes, but it means bundles don't have to carry so much logic and so many optimizations, because they can just use the already performant APIs.
Note that you don’t really have to “animate something before it’s removed from the DOM”.
A “disappearing” animation is an embellishment and a purely visual effect; therefore, it only depends on what the original object looked like, not the entire original object itself.
It is a lot simpler to reason about the state of your system if you delete items immediately and create proxies to handle special effects for deletion.
For example, to have an element “animate away”: create a new purely-visual element at an absolute location that looks like the original object, delete the original object immediately, and then let the proxy go away whenever it is done animating. This completely frees you from having to worry about the true lifetime of the original object.
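A rough sketch of that approach, with illustrative names (clone the node, pin the clone in place, drop the original immediately, let the clone clean itself up):

    function animateAway(el) {
      const rect = el.getBoundingClientRect();
      const ghost = el.cloneNode(true);          // purely visual stand-in

      Object.assign(ghost.style, {
        position: 'fixed',
        top: rect.top + 'px',
        left: rect.left + 'px',
        width: rect.width + 'px',
        height: rect.height + 'px',
        margin: '0',
        pointerEvents: 'none',
        transition: 'opacity 300ms ease-out',
      });

      document.body.appendChild(ghost);
      el.remove();                               // the real element is gone right away

      ghost.getBoundingClientRect();             // flush styles so the transition runs
      ghost.style.opacity = '0';
      ghost.addEventListener('transitionend', () => ghost.remove());
    }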
> For example, to have an element “animate away”: create a new purely-visual element at an absolute location that looks like the original object, delete the original object immediately, and then let the proxy go away whenever it is done animating. This completely frees you from having to worry about the true lifetime of the original object.
The OP's point is that doing this is far more DOM-intensive, and leads to worse performance. It isn't a good solution.
> For example, to have an element “animate away”: create a new purely-visual element at an absolute location that looks like the original object, delete the original object immediately, and then let the proxy go away whenever it is done animating. This completely frees you from having to worry about the true lifetime of the original object.
Yeah, and to properly do that you need that very same JS bloat to:
- somehow observe a DOM element being destroyed (there are no lifecycle methods on DOM objects; a rough sketch of what this step takes follows below)
- somehow figure out if it's the object being destroyed, or its parent, or grandparent, or...
- somehow quickly create a visual proxy for the object being destroyed and quickly substitute it in the DOM (avoiding repaints, reflows and jank), with all the correct things for it: size, position, scroll position, all internal representation (let's say we're animating a login form folding in on itself)
- somehow animate that proxy object
- and then remove that proxy object.
"It's a lot simpler", indeed.
By the way, if I remember correctly, just getting the position of an object causes a full-page reflow. [1]
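For the first bullet above, the userland stand-in for a "will be removed" hook is roughly a MutationObserver like this (names are illustrative). Note that it only fires after the node is already gone, which is exactly why the proxy-element trick exists:

    function onRemoved(el, callback) {
      const observer = new MutationObserver(() => {
        if (!document.contains(el)) {   // also covers the parent/grandparent case
          observer.disconnect();
          callback(el);
        }
      });
      observer.observe(document.documentElement, { childList: true, subtree: true });
    }

    // usage: onRemoved(loginForm, (el) => showFoldAwayGhost(el));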
It's not "wrong" in the sense of being intrinsically unsafe, but IMHO it's a bad habit which can tempt to insecurity in the end.
When working with external files a programmer almost has to work with the framework, and once (s)he does that, the framework (well, a modern framework) will take care of the security details and do it right.
When working inline it's too easy to add a script tag manually, and from there it's a bit too easy for someone in the team to miss something (write a js without the hash/nonce and not notice the warning) or talk him/herself down to lowering security ("importing this 3rd party js is too hard, lets use just a nonce and forget about hashes", "this policy is too constraining, it's just a SMALL script, no risk here").
When working in a team, it's much better to have a hard and fast rule which forces everyone to work right. There's really no reason to use inline when using external files works really well now - and is apparently better for responsiveness too.
Note however that there are interactions between using a nonce and caching which require caution (since nonce is supposed to be used only once but caching can work against that), so proper protection here has a cost in complexity and/or speed.
It's interesting that they benchmark Facebook and Reddit, two sites I find significantly more laggy than, say, Google. Are there choices in how we build sites that are more important than how V8 can optimize things?
Are you comparing the Facebook news feed to the Google home page? Is that a fair comparison, considering the news feed does a lot more than loading static assets? Unless you were comparing the News Feed to, say, logged-in Gmail.
- It's very easy to find JS programmers. There's a _lot_ of them. But programmers proficient with pure functional programming languages are harder to come by.
- JS is natively supported by browsers and it is pretty much guaranteed that the code you write is going to work for ever. Elm on the other side, who knows? It might lose steam and go into support mode, or drop support, or maybe they introduce breaking changes in a future version. JS is a much safer bet.
- Writing Elm does not guarantee a good polished product. You can write bad software in good languages and vice-versa.
That being said, it's only a risk. Maybe your team is more comfortable with Elm and they get more productive. Maybe the language design makes it easier to write code with fewer defects. Maybe it actually gives you an edge.
It is hard to say if Elm brings value to your business. It most likely depends on what kind of business that is (a web agency that cranks out three business-card websites a day? Or one developing a complex app?)
In any case, managers get very nervous about languages that are not mainstream. And they do have good reasons.
> But programmers proficient with pure functional programming languages are harder to come by.
I used to get a similar argument when pushing Python over Java, and my counterargument is the same: I wouldn't hire a JS programmer who was afraid or unwilling or unable to learn Elm. I would bet that any normal person smart enough to solve Sudoku problems could learn Elm; I wouldn't make that bet with JS/CSS/HTML.
> It might lose steam and go into support mode, or drop support, or maybe they introduce breaking changes in a future version. JS is a much safer bet.
But who is using just JS these days? Frameworks and transpiling are the order of the day, no? Same argument applies to them. And JS is a moving target too. As for "breaking changes", well, it is just version 0.19 so far. Once they hit 1.0 I would bet on Elm being more stable than JS+Whatever.
Weigh this against zero front end bugs. That's a staggering (if hard-to-quantify) cost savings.
> Writing Elm does not guarantee a good polished product. You can write bad software in good languages and vice-versa.
Yeah, but that's not an argument in favor of or against any particular language, and in the specific case of JS vs Elm I think it's clear Elm wins. If your developers are truly crap then, yeah, Elm won't save them. But then you also have bigger problems than what tech stack to use, eh?
If you started with Elm then JS/CSS/HTML+Frameworks|Transpiled-langs et. al. would, I imagine, seem like a massive amount of stuff to learn, eh?
I think the Elm-first programmer would be right to ask, "What's the payoff? What awesome new powers do I gain, impossible with Elm, as reward for all this blood sweat and tears?"
(Elm is wildly easier to debug than the status quo stuff. The compiler is awesome for that. So the hypothetical Elm-first programmer has to learn not just the status quo stack but also how to debug it!)
So, again, it seems to me like it behooves the cost-conscious programmer to carefully examine the cost/benefit ratio of non-Elm-implementable features in light of the availability of Elm.
I am presupposing, like I said above, that you can take pretty much any member of the Sudoku-solving public and train them up in Elm in a few weeks. Even paying them at parity with JS devs, the time and effort savings of Elm vs. JS et. al. would be substantial, I believe.
(Personally, I would want at least two or three people on the team who knew what was going on under the hood.)
So you're ok with an elm programmer not willing to learn JS, but not ok with the reverse. That just sounds like good old fashioned tribalism; and personally I'd never hire a tribal programmer, whether that tribalism is about a great language, or the latest coolest JS framework.
What I'd look for is an open, sharp mind. Willing and happy to learn new things and paradigms, even those that go beyond their comfort zones, whether it is JS, Elm or Java. Someone to whom programming languages, frameworks and libraries are just TOOLS to accomplish a certain goal, and not part of their identity.
I'm not sure where you're coming from, but when I'm looking for a job, I pay attention to the tools that companies have picked, and if I get the chance to ask them, why. A company that picks good tools is a lot more attractive to me. All other things being equal, if I had to choose between a company that used JS and one that used Elm, I would prefer the Elm one. I've never written a line of Elm before.
I'm willing and happy to learn new tools. I'm happy with hiking in flip-flops if that's all that's available, but I'd certainly choose hiking boots if given the choice. I suspect most "good" developers have the same attitude.
> So you're ok with an elm programmer not willing to learn JS, but not ok with the reverse.
That's not exactly the spirit of what I meant, but, yes.
> That just sounds like good old fashioned tribalism
I can't really control what it sounds like to you, can I?
My whole point is that Elm is (or one day soon will be) much superior to JS et. al. This is a cost/benefit analysis, not tribalism. (I'm not an Elm fanboy. I don't actually like it that much, in fact.)
It's puzzling that more JS programmers don't adopt Elm.
It's reasonable for an Elm developer to ignore JS to the extent possible.
That's the point of Elm, eh?
> What I'd look for is an open, sharp mind. Willing and happy to learn new things and paradigms, even those that go beyond their comfort zones, whether it is JS, Elm or Java. Someone to whom programming languages, frameworks and libraries are just TOOLS to accomplish a certain goal, and not part of their identity.
Sure! Me too. But now I can make the scarcity argument, no? And you've got to pay those folks well, eh?
I'm postulating that you can take normal people (smart enough to solve Sudoku) and have them productive in Elm within a month or so. You probably would not have to pay them as much as an equivalently-productive JS programmer, and certainly not as much as the kind of person you described.
Again, I'm talking about the business value of a given software production tool not the entertainment value or the personal-growth value for the devs.
> My whole point is that Elm is (or one day soon will be) much superior to JS et. al. This is a cost/benefit analysis, not tribalism.
I'm sorry, but your enthusiasm for Elm might be blinding your reasoning. JS is an EXTREMELY flexible language. Open and malleable to both experts and newbies alike. Open to all styles and paradigms of programming, including object-oriented and functional, which is what makes it possible for it to be the world's most compiled-to language (https://github.com/jashkenas/coffeescript/wiki/List-of-langu...). Elm comes nowhere close to beating the JS cost-benefit analysis.
In the end, business value always wins out. Elm, started in 2012, is even older than React, and would've taken off long ago if it had any business value.
> It's puzzling that more JS programmers don't adopt Elm.
JS developers come from diverse backgrounds (expert, newbie, functional, object-oriented, procedural, etc). The ones that like Elm's approach will adopt it; those that don't will use a different style/compiler/transpiler, and it all ends up as JS. That's flexibility at work.
> which is what makes it possible for it to be the world's most compiled-to language
Okay now that's just ridiculous.
There are no other native scripting languages that the browsers understand. Of course it's going to be the most compiled to language, doh.
Your statement is along the lines of "all Earth life breathes oxygen, this has to mean something".
Well of course it means something. It means that our planet doesn't offer alternative biomes. It doesn't mean that oxygen-based life is ideal (which it isn't).
JS is a legacy everybody is stuck with. That doesn't make it good. It only makes it a safe choice because it's widely used and has a larger pool of programmers as you said.
Most businesses are followers. If a universally better technology makes strides you can be damn sure a lot of those follower businesses will be all over it after Facebook or Google adopts it. Nothing new; businesses have always seemed to operate like that.
Not sure what value your comments bring except stubbornly reasserting the state of our current (and highly suboptimal) reality. Practically every non-junior programmer knows how we arrived here.
And finally, the better business value of JS [devs] is far from a given. It's a local maximum and nothing else. I've seen teams replace 5 average programmers with 1 senior, save 40% in expenses in doing so, and end up with much more maintainable code with much lower support costs. These things do happen even if they are not the majority. And they will keep happening until one day somebody like you lectures people on how "JS was never bound to succeed".
You should realise you are only reasserting the current reality and doing post-hoc rationalisations in an attempt to understand said reality. This is completely fine; we all try to make sense of the world. Reality, however, is always in flux. Keep an eye out. ;)
As I said, I'm not an Elm fanboy. I don't actually like it that much.
> JS is an EXTREMELY flexible language. Open and malleable to both experts and newbies alike. Open to all styles and paradigms of programming, including object-oriented and functional
Sure. So what? So is LISP. I'm talking about "business value", meaning: if you don't care about how shiny and flexible and open your programmers' languages are, you care about the hard $$$ cost of development and maintenance per unit of webapp functionality delivered, then Elm makes a heck of a lot more sense than JS+etc.
The compiler and the fact that you get zero front-end errors just blows away the typical JS dev cycle with a buggy front-end. I can't quantify it because you can't quantify the hypothetical lost business due to hypothetical bugs that don't happen because you're not using JS.
To me it seems clear that it is much cheaper to develop and maintain Elm apps.
> what makes it possible for it to be the world's most compiled-to language
Nah. That's because it's the only language in the browser and people would rather use something else.
> Elm, started in 2012, is even older than React, and would've taken off long ago if it had any business value.
You're begging the question I think, my whole puzzlement is that the business value seems very clear yet Elm hasn't taken off.
To date, the only serious objection I've heard here and elsewhere is that they removed older docs when they bumped a minor version number. That sucks.
> you care about the hard $$$ cost of development and maintenance per unit of webapp functionality delivered, then Elm makes a heck of a lot more sense than JS+etc.
It doesn't, sorry. JS resources, developers and libraries outnumber Elm's by at least 1000x. Business is about supply and demand: more supply == cheaper cost. That's business 101. That's why there are hardly any Elm jobs advertised compared to JS. You keep talking about how Elm makes better business sense, but the reality of it is completely different. Maybe all those employers are just making the wrong business decision, right? Wishful fanboy thinking. You are even in denial about liking Elm, which is pretty obvious to anyone reading your comments about it.
> Nah. That's because it's the only language in the browser and people would rather use something else.
Something else like Dart, Flash, VB, Java... oh wait! They've all been used in the browser before; none passed the test of time. And Elm's approach (transpiling to JS, CSS, HTML) is not unique either, e.g. TypeScript, and it's not exactly shining against the competition in its league either. Once it beats the competition there, then maybe it can start taking aim at JS, CSS and HTML.
> You're begging the question I think, my whole puzzlement is that the business value seems very clear yet Elm hasn't taken off.
It seems very clear to you, because you like and have invested in the language so much. A less invested "business person" will have a more objective view of which of the two (JS vs Elm) makes more business sense.
In all fairness, the functional paradigm isn't exactly all too hard to grasp (when it comes to what you need to be effective, at least; you don't need to understand what a monad is, I still don't!). I started learning Clojure completely cold going through Exercism and it took me maybe 2 weeks of doing a problem a day (not much work at all) before I felt like I started to "get" FP. In fact, I would say it's even easier in a lot of cases because FP is a much "purer" skill set: any imperative language still uses functions, and arrays, and switch statements, and overloading. In FP, you have all of those familiar concepts (just presented in a possibly different way) but you don't need to worry about how classes work at all, or even loops in some cases!
What most people do not realize is that JavaScript was designed as a functional language with a C-like syntax. When I talk about learning proper JS, I mean getting back to its functional roots, because that's where it really shines.
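By "functional roots" I mean things like first-class functions, closures and higher-order functions; a toy example (made-up data):

    const posts = [
      { author: 'alice', title: 'Why FP?' },
      { author: 'bob', title: 'Why not' },
    ];

    // A function that returns a function (a closure over `author`).
    const byAuthor = (author) => (post) => post.author === author;

    // Data wrangling by composing higher-order functions instead of loops.
    const titles = posts.filter(byAuthor('alice')).map((post) => post.title);

    console.log(titles);   // ['Why FP?']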
If you're open to remote hiring, I highly doubt it. I don't know much about Elm (except that the people using it really like it), but the argument is one I've heard before with Elixir. Hang out in the Elixir Slack or an FP Discord for a day/week and you'll notice a massive amount of people talking about how much they like the language and wish there was somewhere hiring that used it. The people are definitely willing.
Kolme's comment is a very good reason; as someone who picked a non-standard frontend language I'll add my 2 cents. I'm working on a project that went from JavaScript to ReasonML when we felt the lack of a really strong type system was making the complexity of our app (which is unusually complicated) unmanageable.
When I was evaluating Elm it was my favorite choice out of all the various strongly typed flavors of JavaScript: clean syntax, good abstractions, and very strong runtime guarantees. The thing that made us reject it, and will keep me from recommending it to others, was the change made, I believe, in version 0.15 which, as I understand it, restricted FFIs so that only core maintainers could write them. I know several companies have frozen their version of Elm due to the change.
Particularly when working with external packages written in JS, not having an escape hatch is a huge vulnerability. We've already hit one instance since using Reason where if we could not drop down into pure JS we would have had to either rewrite one of our main dependencies or at least hard-fork it and re-write substantial portions of the application, a multi-month (maybe multi-year) slowdown that would have killed the project.
At this point I would not recommend Elm to anyone who is not willing to rewrite any pure javascript module they might want to use (this may fit the bill at some very large corporations).
> But what happens when you need to do something in JavaScript? Maybe there is a JavaScript library you absolutely need? Maybe you want to embed Elm in an existing JavaScript application? Etc. This chapter will outline all the available options: flags, ports, and custom elements.
Ports have always existed. The change was to stop people writing "effect modules" which were the unsafe version of this where you could run arbitrary JS.
The upside to this is that because ports work by message passing, it means that all the language's guarantees can be held even when using them, and packages can't do anything without you knowing about it.
The downside is it is a lot more verbose and awkward. In my opinion, it also makes community contribution to the standard library a lot harder.
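(For anyone unfamiliar: the JS half of a ports setup is just message passing in both directions, roughly like the sketch below, assuming an Elm 0.19-style init and made-up port names declared on the Elm side.)

    // Assumes the compiled Elm output (elm.js) is already loaded on the page.
    const app = Elm.Main.init({ node: document.getElementById('app') });

    // Elm -> JS: subscribe to messages Elm sends out through a port.
    app.ports.saveToStorage.subscribe((data) => {
      localStorage.setItem('state', JSON.stringify(data));
    });

    // JS -> Elm: push messages back in through another port.
    window.addEventListener('storage', (event) => {
      app.ports.storageChanged.send(event.newValue);
    });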
Those languages are also subjectively much harder to learn. I liked Elm and really tried to learn it on the side, but even after a week or so of spending time with it I had a hard time wrapping my head around how to do simple DOM updates.
Compared to that, React took me a day to get comfortable with. There is an argument to be made though, that behind that day there's potentially all those years of working with imperative languages which just isn't there for Elm.
In my situation we specifically avoided Elm due to the instability of their documentation. v0.18 docs were cleared off of their servers and replaced by v0.19, with unannounced breaking changes.
The (B?)DFL pushes the idea that, even though it hasn't hit a 1.x version, it's production-ready and stable; in our experience that isn't the case :(
How does "use Elm" fix the problems mentioned in the article? Elm is compiled to JS after all and has to be downloaded/parsed, which is what the article is talking about regarding cost.
After playing around with Elixir and Phoenix, Elm is the last point of the trident I've been meaning to touch. Now if only I could find somewhere near me that uses any of those...
haha, programmers sure do love to virtue-signal with regard to JS. We get it, you're a sophisticated programmer that wouldn't be caught dead writing JS if you had a choice. It's a clever quip but not really contributing anything to the discussion of the topic at hand.
Ok. Just so I can understand a little better, in an ideal world, which part of my comment would I have omitted so that it wouldn't be considered a personal attack? I understand that the tone is sarcastic, but I am curious which part HN moderation regards as a personal attack. I didn't name call or use profanity, so it comes as something of a surprise to me.
> it seems to suggest optimizing your site for V8 a bit too much.
+1. After seeing some opinions that Chrome is becoming the new IE over the past few weeks, I've switched to Firefox to do my small part in trying to prevent another browser monopoly.
Which parts of the article aren't applicable to Firefox? It seems like most of these idioms should give advantages to SpiderMonkey (if it's still called that) and V8.
Tip no. 1: Don't put any code on the page that doesn't need to be there, then any minute differences in the exact implementation likely won't add up to much.
I would also be interested in the cost of developer time lost by using the language itself and compensating for its almost entirely lacking standard library.
I have usually found, installed and started using a good JavaScript library in less time than it would take me to make sense of MSDN's enterprise-soup documentation.
Hmm. When I need to find a specific JS library I do a search and there are a good 20+ of them all with different dependencies, levels of support, different past issues, etc. It's so difficult to pick a good one. I mean, you can just _pick one_ and go on with your life but if you pick the wrong one it can really bite you.
Whereas with C# or Java or Ruby or Python or countless other languages the standard library is big enough that a quick search, find an example and boom I'm done (I always thought the MSDN was amazing; it's like Mozilla's JS documentation but for C# with tons of awesome examples). If it's highly complex I can find a library that likely depends mostly on standard library functions and not have to worry about tons of dependencies.
I do a ton of development in JS and I think its standard library is one of its biggest flaws. ECMA would rather avoid adding too much to it, as they seem to view the language as a small scripting language where the platform should provide everything. That's a valid view, I think, but I don't agree with it at all. The language has grown far too much to keep that view, IMO.
What does MSDN have to do with this? Are you talking about C# and .NET specifically? Have you seen the current documentation? https://docs.microsoft.com/en-us/
Can you provide some examples of documentation you do like?
I can't ever remember reading some documentation and saying, "Wow, that was fun!", so I can't say there's any documentation I like. But I do find that the MSDN documentation usually gets me working, and not reading documentation, much faster than the docs for e.g. Python, Java, JavaScript, CSS, my pedometer.
I'm a bit out of the loop regarding C#, but I don't remember having trouble finding what I needed in .Net itself. MSDN was bad back then but StackOverflow has been an excellent resource for C# from the beginning.
As a JS dilettante, I found this incredibly surprising.
> const data = { foo: 42, bar: 1337 };
> can be represented in JSON-stringified form, and then JSON-parsed at runtime:
> const data = JSON.parse('{"foo":42,"bar":1337}');
> As long as the JSON string is only evaluated once, the JSON.parse approach is much faster compared to the JavaScript object literal, especially for cold loads.
I hate these microperformance hacks. There’s no way you’re going to have a bottleneck here and you’re making the quality of your code dramatically worse.
I hate that this micro performance hack actually is better - why don’t they just fix this in the JIT so that the literal is just as fast as the JSON parse? Why do we have to think about this?
This is happening way before the JIT phase, at the parsing phase.
And the reason it can't be as fast as a JSON parse is explained in the article: the grammar of JSON is much simpler. The literal parser in normal JS needs to deal with things like computed keys, shorthand properties and arbitrary expressions as values (see the snippet below), so it needs to do more branching than a JSON parser. And yes, you can pre-scan the string to make sure it's not doing anything like that... but that probably already costs more than the tokenizer phase of a JSON parse, and it's overhead you still need to pay that a JSON parse doesn't have.
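To make that concrete, these are the sorts of constructs the full JS parser has to be ready for that JSON simply rules out (illustrative names):

    const bar = 0;
    const getFoo = () => 42;
    const key = 'dyn';

    // Legal in a JS object literal, impossible in JSON:
    const obj = {
      foo: getFoo(),        // arbitrary expression as a value
      [key + '1']: 1,       // computed property name
      bar,                  // shorthand property
      baz() { return 2; },  // method definition
    };

    // JSON only ever sees quoted string keys and literal values:
    const fromJson = JSON.parse('{"foo": 42, "bar": true, "baz": null}');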
Also micro performance becomes macro at scale. This is not something that you worry or think about with a handful of literals. It would be good for an inner loop or code generator.
Sounds like some sort of compiler hint like ‘expect json’ or ‘use json’ right above a json literal would be useful. Assuming it matters enough to be worth it.
IANAProgrammer, but couldn't it do something like try/catch: assume it's JSON-compatible and fall back to the [much] slower method if the parse fails? Does that cost too much time/resources?
Isn’t that effectively what a backtracking parser does? My naive guess is that’s already roughly how the JS parser works, and that process is what makes it slower than the simpler JSON parser.
We already established JS is far slower to parse; in most cases JSON parsing would fail, IIRC, on the first character (^[^{])... so the cost would be the time it takes to check the first character, and then only JavaScript that started with { would be slower. Which I guess is virtually none.
I suppose those sorts of questions are part of what makes language design interesting.
While it's possible that there's room for improvement, it makes sense that parsing a subset of JS (JSON) is faster than parsing JS. JSON is just so much simpler.
I understand what you and the OP are getting at, and I'm not saying it's wrong, but it misses the broader point that other posters were positing. It's the exact same point I came to make, but I found myself in good company. Why?!? And it's not specific to JS or a direct criticism of JS; this issue is becoming endemic. It's happening everywhere now. We shouldn't have to depend on language-specific hacks to make things "work" in that language.
JS has gotten so out of hand that we now have/need new metalanguages like TypeScript, CoffeeScript, Babel, (insert buzzwordy JS lang here) that transpile into JS because it's become so unwieldy. All this just to be able to make a simple web app. The fact that V8 can parse JSON faster than processing a native JS literal is a compiler problem. Why is it suddenly my problem? It's one thing if it's a bug that we have to temporarily work around (these things happen), but now it's becoming the norm.
How are people new to the industry supposed to learn something like JS if the only way to be proficient at it is through an endless, ever-changing array of hacks? Especially when it's something as counterintuitive as parsing a string somehow being faster than the actual equivalent code. The "move fast and break things" mentality that has permeated tech as of late has done just that: got us nowhere fast and broke everything along the way.
Javascript is messy but it's everywhere and has gotten significantly better with recent versions. Typescript is just a different flavor offering strong typing and other features, just like Scala or Kotlin can also replace Java if you want.
This particular hack is absolutely nothing you have to worry about, just like esoteric performance tweaks available in any language stack. Write the JS you need and use it. Then profile and optimize as necessary.
You can apply all of the same reasoning to assembly. People did write that by hand at some point. Then they abstracted that away into higher level languages. I’m sure there’s a ton of hackery going on there as well.
I met someone recently who had to write pretty high-performance JS to ship webapps targeting feature phones. Presumably every bit of performance helps there, and as someone else mentioned, the compiler can probably do this bit for you.
Sure, but 99.99% of devs don’t need to know this and probably shouldn’t know this lest they decide one day “this will be faster so I’ll just do it this way” not thinking of how awful and confusing it will be for everyone else.
Plus, in some future implementation, it could happen that the current faster way becomes slower, due to changes in the JS interpreter / JITter etc. At least, I think it could. Not a language lawyer or implementer.
It's actually pretty common for single-page apps to have a large amount of configuration data (e.g. all the client-side routes) and potentially a large amount of application data sent from the server-side renderer. And code compilation can be a significant part of the cold-boot time for large single-page apps. I haven't verified, but it's very conceivable that this would significantly speed up time-to-interactivity for a very large single-page app.
A good rule of thumb is to apply this technique for objects of 10 kB or larger — but as always with performance advice, measure the actual impact before making any changes.
Might be worth investigating if you have a ton of JSON.
> There’s no way you’re going to have a bottleneck here and you’re making the quality of your code dramatically worse
A tweet was going around saying that this change led to a 30% improvement in time-to-interactive.
If you're dumping Redux state from the server onto the page, you could benefit from it, and it's really only one place, so it's not actually making your overall code "dramatically worse".
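For the curious, the usual place this lands is the server-rendered HTML shell; a rough sketch with hypothetical names (note the escaping so the embedded string can't break out of its script tag):

    // Server-side: serialize the store once and embed it as a JSON.parse call,
    // so the client pays the cheap JSON parse instead of evaluating a huge literal.
    function renderStateScript(preloadedState) {
      const json = JSON.stringify(preloadedState)
        .replace(/</g, '\\u003c');               // keep "</script>" out of the HTML
      return '<script>window.__PRELOADED_STATE__ = JSON.parse(' +
        JSON.stringify(json) + ');</script>';
    }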
I worry this is going to become new accepted wisdom without being tested fully (it's probably already a Webpack plugin). My questions:
- What about other browsers?
- How big does it have to be to have a meaningful effect? I've seen an example out there of someone demonstrating amazing gains... on their 163 KB of initial state. That's not typical. Or at least I really hope it isn't.
Since JSON’s syntax is much simpler and smaller, it’s faster to parse a very large object as JSON and return the newly built JavaScript object rather than parse the object tree in JavaScript, which has a much more complex syntax parser. Obviously there’s diminishing returns for smaller and smaller objects.
The JSON version only requires the full, complex JS parser to find the end of the string parameter. The string itself is then passed to a specialized JSON parser that is faster because the language is a lot simpler and it has to check fewer possibilities at each point.
Apparently this is faster than running the whole string through the full JS parser, which has to deal with all kinds of potential shenanigans in that block, like references to variables instead of literals etc.
But that only applies to the parsing step. With JSON.parse you will parse it every time the code executes, so if that code is in a function, the JavaScript literal probably will be better, because the compiler can't just optimize out the JSON.parse (someone might redefine the window.JSON object or its parse property).
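In that case the usual fix is just to hoist the parse out of the hot path, e.g. (sketch):

    // Parse once at module scope; every call reuses the already-built object
    // instead of re-running JSON.parse (or re-evaluating a big literal).
    const DEFAULTS = JSON.parse('{"retries": 3, "timeoutMs": 5000}');

    function makeRequestOptions(overrides) {
      return { ...DEFAULTS, ...overrides };
    }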
Indeed! And I was going to say the same thing. It makes perfect sense if you are a programmer with a basic understanding of how languages work. But like someone else said, I would not have thought about it first thing when writing code.
Perhaps the reason many (like myself) did not immediately expect the speedup is that they started on, or are still using, other non-scripting programming languages.
There, the language parsing penalty is paid during the compile step, and as far as a user is concerned the object approach is obviously faster than parsing from JSON.
This does leave me wondering: is the result here applicable to other scripting languages too? The argument in the article would seem to apply in the general case, but perhaps other languages have some particularities that change the result.
Another strategy is to make the files cacheable, and have some version naming scheme to invalidate caches. You can also lazy-load features, as most users will likely only visit the front page and then leave after 3 seconds. No need to have users wait for functionality they won't use.
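Lazy loading these days is mostly just dynamic import(); a sketch with an illustrative module path and element id:

    // Fetch, parse and compile the export feature only when it's first used.
    document.getElementById('export-btn').addEventListener('click', async () => {
      const { exportToExcel } = await import('./excel-export.js');
      exportToExcel();
    });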
Several of the bundling tools have had a way to emit the file with a hash of the contents on the end of the filename, rather than using version numbers.
One of the places this helps you is when you push out a one-line JavaScript bug fix without invalidating your CSS caching across the entire site, because the CSS files are identical.
It also helps you with people with the website open when the new code gets pushed. They either have all the old code or all the new code and no bizarre mishmash of the two.
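(In webpack terms that's roughly the [contenthash] substitution; a minimal sketch assuming webpack 4+, with mini-css-extract-plugin as one common way to do the CSS side:)

    // webpack.config.js (sketch): each file's name carries a hash of its own
    // contents, so a JS-only fix doesn't bust the CSS cache and vice versa.
    const MiniCssExtractPlugin = require('mini-css-extract-plugin');

    module.exports = {
      output: {
        filename: '[name].[contenthash].js',
      },
      module: {
        rules: [
          { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] },
        ],
      },
      plugins: [
        new MiniCssExtractPlugin({ filename: '[name].[contenthash].css' }),
      ],
    };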
By making the files cacheable I mean not bundling them. The first load will be longer (unless you use lazy loading), but successive page loads will get the files from cache, and if there's an update, only that file will update. It's not unusual to have updates shipped several times per day, and if there are many users there will also be many unnecessary reloads of the bundle.
It's a bit of an optimization, as to be 100% sure users always get the latest files the version naming scheme is needed, but most of the time it will work anyway, e.g. if you have no caching layers/CDN etc. in front of the web server.
So if you want to shave off milliseconds on first load, go with the bundle, but if you want to save users' bandwidth and make your site/app more lightweight, don't bundle!
Most bundlers have ways of splitting the bundle into smaller pieces that can be cached (and cache-busted with the hash in filename) independently. I think that's what the parent commenter was talking about. So you don't have to choose between one huge bundle and hundreds of individual non-bundled files, with code splitting and lazy loading, you can do something in the middle and have like 3-30 "minibundles". Then, if you make a modification that only affects one of those bundles, only that bundle needs to be reloaded and others can still be served from cache.
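(The webpack knob for those "minibundles" is roughly optimization.splitChunks; a sketch:)

    // webpack.config.js (sketch): carve out a separate vendor chunk so that an
    // app-code change leaves the vendor chunk's hashed filename, and therefore
    // its cache entry, untouched.
    module.exports = {
      output: { filename: '[name].[contenthash].js' },
      optimization: {
        splitChunks: {
          chunks: 'all',
          cacheGroups: {
            vendor: {
              test: /[\\/]node_modules[\\/]/,
              name: 'vendors',
            },
          },
        },
      },
    };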
With their redesign and a few other choices, Reddit has gone from one of the most beautiful places on the web to the second most hostile UX that I put up with.
I don't want to "GET NEW REDDIT" as I'm urged to in the top left of every page because I don't want a card-based layout. The whole reason I and so many others left Digg for Reddit to begin with was for a highly skimmable, information-dense site!
Similarly, I don't want to install a mobile app. The page worked fine on mobile when I got my first iPod Touch a decade ago. Why do I have to see a huge USE APP button in the nav bar and then lose another 20% of the page to a "See Reddit in..." section at the bottom that's also urging me to install a native app?
Infinite scroll actually slows down my browsing experience too, since it no longer loads as many entries at a time and I have to keep waiting for pagination.
They lost me as a premium subscriber because they no longer offer a discounted annual subscription plan and instead expect me to pay $6/mo ($72/yr) for ad-free access to a site that is comprised of user-generated content. That's more than the subscription price of some actual news sites.
> How do you implement infinite scroll without JavaScript?
Technically it's possible.
Render the website server-side as images: Chrome 75+ supports native lazy loading of images, so serve a series of them stacked vertically. Each image represents a new page; scroll through them to navigate.
Click detection can be done with a <map> tag.
I think you can also just keep sending HTML. Send a page with page2.png at the bottom and keep the connection open; page2.png is a sentinel image to detect scrolling. When the browser requests page2.png, treat that as a command to the server to start sending the HTML for page 2 on the original connection, which in turn has page3.png at the bottom.
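As a toy sketch of that sentinel-image trick in Node (single hard-coded session, no framework; renderPage() and the /sentinel/ URLs are made up for the example):

    // Toy sketch: stream page N+1 over the original HTML connection once the
    // browser asks for the sentinel image that sits at the bottom of page N.
    const http = require('http');

    function renderPage(n) {
      return `<section><h2>Page ${n}</h2><p>...content for page ${n}...</p></section>` +
        `<img loading="lazy" src="/sentinel/${n + 1}.png" width="1" height="1">`;
    }

    const openPages = new Map(); // session -> the still-open HTML response

    http.createServer((req, res) => {
      if (req.url === '/') {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.write(renderPage(1));        // page 1, ending in the /sentinel/2.png trigger
        openPages.set('demo', res);      // keep this connection open for later pages
      } else if (req.url.startsWith('/sentinel/')) {
        const m = req.url.match(/(\d+)\.png$/);
        const htmlRes = openPages.get('demo');
        if (m && htmlRes) htmlRes.write(renderPage(Number(m[1]))); // next page on the old connection
        res.writeHead(200, { 'Content-Type': 'image/png' });
        res.end();                       // zero-byte "image": the request itself is the scroll signal
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(8080);

Sessions, backpressure, and the fact that the "image" never decodes are all hand-waved here; it's a party trick, not a recommendation.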
It doesn't put a big red "GET NEW REDDIT" in the upper left that does a single-click update of your account settings? What region are you in?
> reddit.com/.compact
I just tried that and the top 40% of my screen was spent on a banner telling me, "You've been invited to try out Reddit's new mobile website!" Clicking that led me to a page where both the top bar and the bottom section harass me to install their mobile app.
I use https://ns.reddit.com along with Firefox's Redirector add-on to make sure that I always stay on that domain... not sure how much longer it will stay around though
> Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://www.reddit.com/r/all/best.json. (Reason: CORS request did not succeed).
I don't have any specific plans about opening the registration up yet, but you (or anyone else) can just email me for an invite. It's not intended to be difficult to get one, I just want to keep the site's growth under control.
The largest cost of JavaScript is having to write JavaScript. It's insane that anybody would willingly do that nowadays. I think literally everybody writing JavaScript secretly wishes they were using some other language with a JS transpiler.
I refuse to believe anybody can honestly say they want to write JavaScript code, unless it's code that makes them write less JavaScript in the future.
I have yet to find a better package. When you transpile, debugging always has that funky thing with two different source codes, where you wrote one and the other is what is actually running. And it's not like you can do anything that you can't do in JavaScript: either you get a feature subset, or you ship with a library that the transpiler attached.
You miss the point. The issue isn't whether it's nicer to use a transpiled-to-JS language or JS itself. The issue is that JS as the root is an unfortunate situation, and there are so many other languages that would be superior.
Just look at the history of JavaScript. It does not deserve the attention or human time that it currently gets (compared to other languages, assuming we could all decide simultaneously to replace JavaScript with another language).
I have a hard time justifying using any other dynamically-typed language over JS. And JS' static typing story is pretty mature. And it has cool features like async-everything and a first-class Promise.
So, let's not be too extreme here just because you don't like something. It gets really silly to hear HNers suggest the equivalent of "I don't understand how people own blue cars. They must secretly aspire to drive $myFavoriteColor. Don't they realize how stupid blue looks to me?"
Great article. I applaud the massive amount of effort that's resulted in JS being far faster than anyone ever expected.
It's time to switch to WASM though. Half native speed with almost no parsing overhead. As integration and tooling improve I see no reason to stick with JS.
It also rectifies a number of historical mistakes that are nearly impossible to fix. Mainly threading. WebWorkers etc are a huge hack. No atomic data structures, no shared memory, huge overhead. WASM will be many times faster than JS from threads alone. Add the natural speed advantage and you're looking at maybe a 20x performance difference.
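For context, the JS-side integration is already pretty small. A rough sketch of loading and calling a WASM module from JS (assuming a hypothetical add.wasm that exports add(a, b) and needs no imports):

    // Sketch: fetch, compile, and call a WebAssembly module from JS.
    // instantiateStreaming compiles while the bytes are still downloading,
    // so the cost sits off the usual JS parse/compile path.
    WebAssembly.instantiateStreaming(fetch('/add.wasm'), {})
      .then(({ instance }) => {
        console.log(instance.exports.add(2, 3)); // 5
      });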
Yes. Rustaceans have stepped up and provided a ton of nice shim layers, though. So practically, you don't have to actually write your own JS to interface.
This isn't quite true. It's only true if you're browsing from a computer that is moderately fast (at short timescales, before throttling, at minimum) and using a browser that does not respect your software freedoms. There are literally hundreds of millions of us out there for whom executing JavaScript on sites made entirely out of JavaScript can be a significant cost, whether it's because you use many, many tabs or because you browse on an old smartphone.
But of course that's not the real cost. That's the computation cost. The real cost of websites switching to an application model directly targeting the DOM is the loss of accessible webpages and the chance that, every time, you're going to get owned, whether it's via some information leak from speculative execution or something else.
Running JS on every page these days is just as stupid as opening every attachment that gets emailed to you.
"On mobile, it takes 3–4× longer for a median phone (Moto G4) to execute Reddit’s JavaScript compared to a high-end device (Pixel 3), and over 6× as long on a low-end device (the <$100 Alcatel 1X)"
Notable that they avoid the iPhone comparison. It runs circles around even the top end Androids for JS performance.
1. v8 isn't available on iPhone, so they couldn't really show any results.
2. Unless you have the Samsung SoC variant of the just-released Galaxy S10, pretty much every Android phone, flagships included, is slower at JS than the current entry-level iPhone, which is the iPhone 7.
And the current Android flagship, along with the mid-range and low-end models, still has another 4 years of life in it, with a roadmap that shows very slow improvement in mid- to low-end smartphones. Maybe we should just go back to shipping HTML instead?
Maybe you misunderstand what this article is about. It's not about which phone is best; it's about how JS runs in the real world, where not everyone has super fast internet all the time and not everyone has a super fast phone or computer.
So if you are a web developer, you get some good advice from this article related to JS, not advice about what phone to buy.
They are making the point that low end phones are much slower in executing JS than high end phones. Perhaps to remind developers that their experience, on their phone, isn't the experience everyone has.
I imagine a lot of developers have recent iPhones. They are, depending on benchmark, up to 3x faster than a flagship Android. Which means the top to bottom chasm is much larger than the snippet suggests. Noting it would strengthen their point.
OK, sorry, I misunderstood the iPhone point then, my bad. I interpreted it as: the iPhone is super fast, so nobody should forget to mention in any article that the iPhone is the fastest phone and Intel i9s are the fastest CPUs.
I agree that we the developers should not forget that a lot of people have less powerful hardware.
I sometimes hit this problem at work. Say we offer feature X, like uploading an image that you can then crop and apply a filter to: how should I add this feature without the code even loading in the browser if you don't intend to use it? Is there a pure JS way to do it? AFAIK there is no good way to load a script at runtime and get an event when the file is loaded and parsed.
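A minimal sketch of the script-injection-plus-onload approach the replies below refer to (not their exact snippet, which isn't reproduced here); the /js/image-editor.js path and the ImageEditor global are placeholders:

    // Sketch: load a feature's script only when the user actually asks for it.
    // For a classic script tag, the load event fires after the file has been
    // fetched, parsed, and executed, so whatever it defines is ready here.
    function loadScript(src) {
      return new Promise((resolve, reject) => {
        const s = document.createElement('script');
        s.src = src;
        s.onload = resolve;
        s.onerror = () => reject(new Error('Failed to load ' + src));
        document.head.appendChild(s);
      });
    }

    document.getElementById('edit-image').addEventListener('click', async () => {
      await loadScript('/js/image-editor.js'); // hypothetical feature bundle
      window.ImageEditor.open();               // hypothetical global it defines
    });

In module-based code, dynamic import() gives you the same thing, with the parse-and-execute guarantee built into the returned promise.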
Thanks, I use this technique, but I had issues with it in the past. I just spent a few minutes trying to find documentation confirming whether onload guarantees the script is loaded and has finished parsing/interpreting, but I can't find it.
Your example code works in all the browsers I tested, so maybe it was an issue with the particular script I was testing. There are third-party scripts, like ones for embedding an image editor, that could also use this trick to load some dependencies.
Great, so I was wrong then; this should work with most third-party scripts.
Not sure if this is news to you, but the Chrome browser on iOS (as well as ALL other browsers on iOS, by Apple stipulation via App Store rules) uses WebKit underneath.