People seem way too optimistic about web assembly.
The elephant in the room is download size. A wasm photoshop, even if it works and performs well, is still a multi-gigabyte "web page". The browser is in no way set up to handle that.
Even simple things will be huge compared to a JavaScript web page. Let's say you write your todo app in Python with Qt bindings. Sure, wasm lets you run it on the web. You'll just have to ship Qt, the Python interpreter, low-level graphics rendering code, shims for system calls in the standard library ... overall you'll end up way, way heavier than the JavaScript + DOM version. Probably 100MB or something.
wasm as it's implemented is going to have a very narrow band of usefulness. Basically, isolated computational modules (e.g. a physics simulation), and games. The games won't be, like, "Call of Duty on the web", they'll be little flash-type games with not too many assets. Creators will have to put a lot of effort into asset loading and compression to get around browser limitations.
Except that the download size for an application of the same complexity in JS and Wasm looks like it will be smaller in Wasm. There are plenty of demo games made in Wasm that play great already. Doing the same thing as those demos requires larger JS and is slower in JS.
This just expands on what can be done now. No one is claiming this will fix everything, it will just allow more than we have now and leverage many of the advantages of native development.
As for Photoshop, why does a Wasm application need to be structured the same way as a desktop app? Desktop apps shipped all the software in a single download because that download came from a disc or was expected to be used after being disconnected from the Internet. Why not download the functionality piecemeal? Download a small set of libraries that enable core functionality, then download modules as they are requested. When the user clicks a menu or button, go get the code that makes the functionality behind it work. I am not saying break up every button, but if there is a new screen or group of functionality, make that a module and go get it.
In a video game it could be broken into levels or regions of the in-game map. In gaming, paged loading is a solved problem; it seems like it could be applied here. The next level or nearby regions can be downloaded while the player explores the current one.
Exactly. If it's a packaging problem, we'll just need to be smarter about how we package things. Shipping Photoshop as a unified blob of WebAssembly is a dumb idea, just as having the executable load in every single DLL it might ever need is ridiculous. These things are fetched on demand.
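For what it's worth, the browser-side plumbing for on-demand loading is already pretty small. A minimal sketch (the /features/ URL layout, the #blur-menu-item selector, and the apply_blur export are all invented for illustration, and the empty env import assumes the module needs nothing else):

    // Load a feature's wasm module only when the user first asks for it.
    const loaded = new Map<string, WebAssembly.Instance>();

    async function getFeature(name: string): Promise<WebAssembly.Instance> {
      const cached = loaded.get(name);
      if (cached) return cached;

      // Each feature ships as its own small .wasm file (hypothetical layout).
      const bytes = await (await fetch(`/features/${name}.wasm`)).arrayBuffer();
      const { instance } = await WebAssembly.instantiate(bytes, { env: {} });
      loaded.set(name, instance);
      return instance;
    }

    // Wire it to a menu item so the blur code is only downloaded on first use.
    document.querySelector("#blur-menu-item")?.addEventListener("click", async () => {
      const blur = await getFeature("blur");
      (blur.exports.apply_blur as () => void)();
    });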
You're comparing JavaScript apps, where the runtime is built-in to the browser, against WebAssembly apps written in other languages with large runtimes. That's nothing more than stacking the comparison to get the conclusion you want.
Not everyone writes their apps in JavaScript. That's a given, we have no choice but to accept this. Some apps or libraries are already written in another language, and it would be nice to run them in the web browser, so people do that. Some people make web games using Unity. Lots of reasons that this happens.
So you end up with emscripten + asm.js, delivering a minified JavaScript blob multiple megabytes in size, which must be parsed, compiled, and executed. WebAssembly reduces the size of the blob you have to deliver and makes the parsing + compiling steps much faster.
You may not like that, but you don't have to write apps that way. You can still write JavaScript if that's what you like. WebAssembly delivers improvements for people who weren't writing JavaScript in the first place, or for people who wish they weren't writing JavaScript.
> The elephant in the room is download size. A wasm photoshop, even if it works and performs well, is still a multi-gigabyte "web page". The browser is in no way set up to handle that.
So you split it into chunks and download the bits you need as you go.
Microsoft has already figured out how to do this -- you can run Office (the real, full Windows version) basically streaming from the Internet already.
> So you split it into chunks and download the bits you need as you go.
at last, the renaissance of the microcomputer programmers (like the COBOL renaissance of the mainframe programmers)! that kind of technique was common back in the 80s to get things done
That's funny - I stopped using Microsoft Office because of that very thing. I have Office '07 still for some minor graphical compatibility reasons, but generally speaking I use the bundled, free, offline, WordPad/NotePad, LibreOffice, and/or AbiWord.
WebAssembly APIs give you access to the compiled code of a module. A Web page can store that code locally, e.g. using IndexedDB, so it doesn't have to be downloaded and recompiled again. Thus, the first time you visit a page, you're effectively installing the app. See https://developer.mozilla.org/en-US/docs/WebAssembly/Caching... for details.
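A minimal sketch of that approach, assuming the browser allows a compiled WebAssembly.Module to be structured-cloned into IndexedDB (support for this has varied between browsers and versions; the database and store names here are made up):

    const DB_NAME = "wasm-cache";
    const STORE = "modules";

    function openDb(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open(DB_NAME, 1);
        req.onupgradeneeded = () => req.result.createObjectStore(STORE);
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    async function loadModule(url: string): Promise<WebAssembly.Module> {
      const db = await openDb();

      // Try the cache first.
      const cached = await new Promise<WebAssembly.Module | undefined>((resolve) => {
        const get = db.transaction(STORE, "readonly").objectStore(STORE).get(url);
        get.onsuccess = () => resolve(get.result);
        get.onerror = () => resolve(undefined);
      });
      if (cached) return cached;

      // Cache miss: download, compile once, and store the compiled module.
      const bytes = await (await fetch(url)).arrayBuffer();
      const module = await WebAssembly.compile(bytes);
      db.transaction(STORE, "readwrite").objectStore(STORE).put(module, url);
      return module;
    }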
We already are getting desktop apps with HTML based UI [1].
A wasm Photoshop would cache the download, use an HTML front end, and be written in a compiled language.
Sure, you'll be able to use the tech to do stupid things like running a Python interpreter inside a WebAssembly virtual machine. But then your argument is just that if you use the tech to do stupid things you'll get stupid results. Total straw man.
Given an enormous WASM binary one could write a small bootstrap binary that fetches the binary and stores it in local cache. Future updates to the mega-binary can be done with some kind of delta-diff mechanism.
Now, let's say you rely on Qt in some manner. Well, likely the WASM package resides at some known URI and is subject to standard browser caching.
A more likely and practical outcome is using the HTML5 DOM UI instead of Qt, and JavaScript instead of Python, etc. Native web technologies the browser already has support for.
Using web platform technologies can actually reduce the download size over native apps. In the game I am working on, written in C and compiled using emscripten, as a rough measure, the equivalent program compiled natively totals an executable size of about 2 MB, not including texture resource data. An optimized asm.js build is about 950 KB and WebAssembly only 580 KB, this includes the .html shell, .js loader, and .wasm binary itself.
This is not a completely fair comparison because I compile out some native code not relevant to the web, and vice versa, but here are a few specifics of where I believe the gains may come from:
curl: the native C app uses libcurl for fetching resources from HTTP and HTTPS servers, but on the web, we have XMLHttpRequest and HTML5 Fetch. Emscripten provides the built-ins emscripten_wget() and emscripten_async_wget() for these purposes. No need to ship HTTP and SSL stacks because the browser already includes them.
glfw and glew: libraries for wrangling OpenGL. Emscripten has its own implementation, which largely just bridges to WebGL and other HTML5 APIs, a very thin layer. SDL, too.
databases: many apps bundle their own copy of SQLite, often as the single-file C amalgamation. I used to as well, even through emscripten, and it worked fine (there is even a pre-packaged emscriptenified sqlite.js). Admittedly I haven't looked into it much yet, but the web platform supports IndexedDB built in, so no extra dependency is needed.
JSON: how many JSON encoders/decoders are there out there, separate copies in all of the apps? On the web, you can rely on JSON.stringify and JSON.parse (from JavaScript, but all functionality is bridged through WebAssembly anyways).
We may see a resurgence in "small C libraries" targeting WebAssembly. There is a growing trend of header-only libraries, especially the popular stb: https://github.com/nothings/stb#stb_libs and there is a growing list here: https://github.com/nothings/single_file_libs which I consult to find tiny libraries appropriate for linking into a web-based C application. lodepng (for decoding PNG images) and miniz (for reading and extracting zip files) are about the only substantial dependencies I have beyond what is in the emscripten standard library. Everything else is provided by the browser; the web has grown into a surprisingly powerful and complete platform.
This article lists, in a few lines, the key differences between PNaCl and WebAssembly:
---
WebAssembly defines no new platform APIs other than some APIs for loading and linking WebAssembly code, relying on standards-based Web APIs for everything else. WebAssembly differs from asm.js by defining a bytecode format with some new operations JS doesn't have, so some spec work was required (and has been done!). Like asm.js, WebAssembly application call-stacks are maintained by the JS VM, outside the memory addressable by the application, which reduces the exploitability of application bugs. (Though, again like asm.js and unlike PNaCl, the compiler is trusted.)
---
The security model for nacl is afaik that you run almost arbitrary native code. It was compiled by someone else, your input is basically machine code that you don't know anything about except for some please-dont-break-my-sandbox checks.
With asm.js, your input is javascript code which can't do all the weird things that native code can do, and you're in charge of generating the native code that ultimately gets executed (much like the traditional JIT setup), so assuming you didn't fuck up the translation, you get to assume it can't do anything that javascript code couldn't have done to begin with, relaxing your sandboxing requirements around, like, arbitrary memory accesses or syscalls.
It means that a bug in the compiler/optimizer could allow WebAssembly code to escape its sandbox.
This is a real issue, but to some extent this extra attack surface is mitigated because vendors are reusing JS compiler backends that are already part of the TCB.
The "compiler" here refers to the WebAssembly -> machine code compiler, not the C -> WebAssembly compiler. NaCl used some clever tricks to guarantee the safety of raw machine code.
Both PNaCl and WebAssembly need a compiler to get the machine code run by the CPU, but the output of PNaCl is checked by the NaCl verifier, while nothing checks that the output of WebAssembly compilation is safe machine code.
As the verifier for NaCl is likely to be an order of magnitude smaller than the component in a WebAssembly implementation that verifies and compiles the bytecode to native code, PNaCl's attack surface is much smaller.
> nothing checks that the output of WebAssembly is safe machine code
By this line of reasoning, “nothing checks that the output of the JavaScript compiler is safe machine code”. Because that is essentially what WebAssembly is: like asm.js, it's just another kind of input to your JS VM.
And… well, you're right. It's the JIT's job to produce sensible code, and it's the browser's job to sandbox that well.
But what's wrong with that?
At least with WebAssembly it's a well-defined, small language that's easy to verify.
>nothing checks that the output of WebAssembly is safe machine code.
The JS/WASM VM does. If unsafe WASM code is allowed to execute, there is a bug in the VM. The VM must prevent semantically incorrect WASM from executing. Control-flow integrity and incorrect use of pointers are checked at load time, and there are traps for out-of-bounds accesses, exceeding stack limits, and invalid indexes in the index space.
This is not about unsafe WASM, which should be rejected by the verifier. This is about the raw machine code the compiler generates from WASM. As with JIT compilation of JS, currently nothing verifies the WASM compiler's output, so a bug in the compiler may result in WASM that passes the verifier being translated into unsafe machine code.
The same thing applies to the NaCl verifier. Bugs in the verifier can cause problems. Verification or generation bugs are bugs, not security weaknesses in the architecture.
(I should not need to mention this, but compiling is a form of verification.)
"Same thing applies to NaCl verifier. Bugs in verifier can cause problems. "
Not like with the other, though. You can statically verify NaCl stuff because it was designed for that to be easy. If WebAssembly doesn't do that, then it's a step down from NaCl, at least on this risk. That it matters should be obvious given all the research by the kind of CompSci people who invented NaCl on eliminating that risk from the TCB. It's part of a larger concept where you make the things generating the risky code untrusted, with the trusted checkers so simple they can be mathematically verified. Here's an example from a top group on NaCl:
I'm not worried about this too much, as it will probably be some brilliant student's personal project in the near future if it's not being researched already. A quick Google shows that GitHub has the full C semantics with a compiler (KCC) and a JavaScript semantics, and WebAssembly at least has a prototype semantics from 2015.
This is not true. A NEXE has different security boundaries, enforced differently on each supported platform. You need to write a disassembler for each supported instruction set and verify the output. This is a more complex task than generating correct code from wasm.
The NaCl verifier is just a loop that essentially matches the instructions against a whitelist and checks their format and offsets. WebAssembly needs a parser/linker/optimizer/assembler. Granted, the format is optimized for fast translation, but just the amount of code to support data structures in the implementation, like maps, lists, etc., must be big.
NaCl also needs those things because it has to compile its bytecode to machine code on multiple architectures. It's no different than webasm, just a different bytecode. A big advantage of webasm is that it's integrated into the existing JavaScript VM, which has already been sandboxed and battle-hardened.
As in my other comment, something like that can be mathematically verified for correctness as well. That requires simplicity if one doesn't want to throw person-years of work at it w/ possibility of finding out it was impossible. My comment links to a formally-verified checker for NaCl as well.
It would be possible to leverage a simple machine code verifier similar to NaCl's in a WebAssembly backend. Nobody does it right now, but there's been some work on taking WebAssembly as input to their Subzero PNaCl->NaCl compiler.
I think this is a good thing. It probably prevented a new DirectX style dark age of the web.
Probably! I don't pretend to be able to predict what path companies would have gone down in the next few years.
It's really good to see that there is a will to agree and that there is more than one player. Could always be more, also for keeping standards sane. (WebSQL officially failed because there isn't more variety)
DirectX fostered multiple generations of gaming on PCs. I dunno what you're talking about with referencing it in a web development context, but it was, and still is, miles ahead of dicking around with broken OpenGL.
WebAssembly is nice and all, but I don't understand why Mozilla is so obsessed with this feature that will be useable by 0.01% of applications. Meanwhile they are falling far behind in a variety of features that are useful to a much bigger % of the web. Safari and Edge have leapfrogged Firefox in providing the important things to web developers.
For starters, this will enable client (browser) software development in languages other than JavaScript/ES6, with, probably, a plethora of compilers to choose from, some of them giving better optimizations than others.
I think this will lead to an explosion of browser-hosted applications, with much more power than before, and will draw many more programmers into serious frontend app development.
Also, as a personal wish of mine, this enables the possibility of being able to program in the language of your choice, both on the browser side and on your server side. For example Haskell/Haskell, Common Lisp/Common Lisp, Clojure/Clojure, Racket/Racket, Python/Python, etc. And I mean using in the browser the FULLY FEATURED version of the language, not a subset or a limited version like ClojureScript, PyJS, Transcrypt, etc., but a full version of the language supporting the full set of libraries available for it.
This also gives us a small step forward in liberating ourselves from being tied to the mainstream operating systems (Windows, OS X, Linux, BSD), because more and more apps will target the browser environment, not the operating system directly.
Roll your own operating system and still keep compatibility with most apps!!
People have been writing client software in those languages for almost 10 years now. Objective-J came out in 2008! I'm not saying WebAssembly won't make this better, maybe even a lot better, but this is not opening new capabilities the way that, for example, shipping modules would.
I think there are still limitations around threading, but basically that's the goal. I think of it as the promise of Java (write once, run anywhere) re-implemented with buy-in from all the major vendors. (So no MS/IBM/Sun fragmentation).
I mean... theoretically you could make a big fuck-off web app right now in JS; it's not like WASM == huge apps. It sounds to me like the back-end coders will be coming forward to the browser, where before you had the front end moving toward the back with things like Node.
> I don't understand why Mozilla is so obsessed with this feature that will be useable by 0.01% of applications.
What follows is speculation:
At a guess, they feel burnt after they lost the whole closed-source DRM battle in the W3C spec process. They realize that something like WebAssembly will become a thing in the next few years, and that if they don't push super hard for a completely open solution from day 1, they're afraid that Google, Microsoft and Apple will get together in a room and make a deal without them.
At the end of the day Mozilla don't only care about delivering a browser, they care about delivering a completely open browser, and they don't want their ability to deliver that to become more threatened in the future.
I think you will be surprised how many apps will use WASM in the future. I'm a C++ dev and working with WASM (and asm.js as a fallback) full time. It's going to be absolutely huge.
Even if it causes Photoshop and CAD software and such to be ported to the web (which would be a huge business model change, so don't hold your breath) we're still talking about a tiny percentage of apps. There's no reason why Slack would rewrite in C++.
Not talking about re-writing. But there is an enormous amount of code out there in c, c++ and other langs that people would love to use in web apps.
In our case we use the same library code on iOS, Android, Windows and in the browser to do computationally expensive operations.
Also, to dismiss productivity apps such as PS and CAD as unimportant is kind of crazy. Huge business model change? They are still an enormous part of the software industry, and the browser is a great platform for distribution in many cases. Performance and code secrecy are two barriers that WASM solves for them.
The question of interest is "In a world where webassembly exists and is well-supported, would Slack have been written in JavaScript (or a JS-targeting language like TypeScript), or was there some other language the developers knew better that they could have worked faster in but the JavaScript tax prevented that option?"
Is it? Since Adobe switched to the subscription model and the "Adobe Creative Cloud" I'd say they would be quite happy if that infrastructure would be good enough to run their very big application suite(s). Not to mention the savings of not having to support two major platforms (and several OS versions for each) - even realizing them only partially would be big. Of course, given the size of their product I'd say there is little use in talking about this at this point, the web platform would have to mature a lot more first.
> this feature that will be useable by 0.01% of applications
If they do this right, I suspect it will be much more heavily used than that. I can already see a world where every major webapp is using this (indirectly, using a language that compiles down to this) for the performance and user experience improvements it could provide.
You mean the thing that is not actually supported in any browser other than Chrome, because Google just came up with it a few months ago to replace <link rel="prefetch">, which worked in most browsers?
(Yes, I know Safari technical preview has support, but shipping Safari does not. We'll see whether Firefox ends up shipping support before Safari or not; the patches to implement <link rel="preload"> in Firefox got posted to https://bugzilla.mozilla.org/show_bug.cgi?id=1222633 earlier today.)
... a chrome only feature, pushed by Google, that isn't considered a standard?
This type of thinking and behavior is going to make Chrome today what Internet Explorer was in the 90's: a toxic, dangerous platform that ignores standards.
Use <link rel=prefetch> instead. This has existed for the entire lifetime of Firefox, all the way back to the first release when it was called Phoenix.
Prefetch is a different thing, and it doesn't give priority to fetch the resource. Preloading, for example, a script is exactly equivalent to adding the script tag minus the execution part.
As people have pointed out, that's Chrome-only at the moment but that's also a big moving of the goal-posts. Your original comment was “far behind in a variety of features that are useful to a much bigger % of the web” — precisely how much user-visible benefit do you think comes from a feature which may accelerate the first load slightly?
If that was that critical, it'd be easy to point to e.g. webpagetest.org traces showing Chrome loading a site significantly faster. And, yes, you can find differences in microbenchmarks but it's pretty rare to find something where rel=preload is a game-changer.
Safari Technology Preview is adding support for <link rel=preload> as well. I'm not sure what the current state is, but the last 2 releases both had changes that were related to this.
Keep reading the entire comment: the point wasn't that rel=preload isn't useful but rather questioning the assertion that it's this huge game-changing feature which is causing ordinary people to switch browsers.
If you actually run benchmarks, it's nice but hardly a game-changer, especially in the post-HTTP/2 world. If you're concerned about cold page load times most sites will see significantly greater benefit from using fewer blocking resources.
I don't recall saying that it was "game-changing", I said that it was useful for a large number of web pages, unlike webassembly which is useful for a very small number of specialized needs.
Same for modules, or Intersection Observer, or many other features that FF is not shipping. Stop trying to prove me wrong and understand my point. Firefox is slow on stuff that makes regular old web developers' lives easier, because they're chasing 3D games.
The problem is that you made a very broad and sweeping claim and then picked a very weak supporting point for it. Web Assembly may not change the game for every or even most sites but for sites which are performance sensitive it's an extremely user-visible improvement.
> Stop trying to prove me wrong and understand my point.
I think you should focus on making your point more clearly rather than defending what was clearly a weak example. For example, you cite modules as something which is apparently a big deal for web developers but not shipped by Firefox. Sounds like Mozilla needs to get cracking … unless you know that only Safari has shipped it and that the Chrome, Firefox, and Edge teams all have it available, but behind a feature flag for testing:
That doesn't support your narrative that the Firefox team is ignoring this or that they're behind the market. And since anyone who isn't targeting only the latest version of Safari is either polyfilling or continuing to use their existing strategy, there's an upper bound on how bad that can be, too.
Similarly, with Intersection Observer you can see that it was enabled in FF50 but had stability issues which led to it being disabled; it is likely to be re-enabled in FF54 based on testing. Unless you have some evidence that the developers who were working on that were pulled off to work on WebAssembly, it doesn't seem like an especially compelling argument.
Again, I'm not saying that any of these things are useless — only that the narrative you're insisting on where Mozilla is ignoring web developers doesn't seem to be well supported by the evidence. At least for the projects I work on, I'd level that criticism at Safari or Edge first and in the much fewer cases where either Chrome or Firefox has a bug or limitation I need to work around it's about as likely to be Chrome as Firefox requiring extra work.
Again, do you have any firm data supporting the assertion that this form of network prefetching is so critical?
I've read many comments where people mention performance in ways suggesting they have issues with UI performance, launch time, extensions, etc. Optimizing cold-cache performance the first time someone visits a site won't help with that at all.
I'm expecting that a lot of people that don't plan on using it still will. Front-end JavaScript developers using something like SJCL have a much more positive future with those crypto primitives done in something other than JavaScript, even if they themselves are just calling them from a JavaScript binding. Graphing and graphics libraries may turn to web assembly for performance. etc etc.
What are the main benefits of WebAssembly over asm.js?
I understand that asm.js was a subset of JavaScript that allowed the compiler to create faster code. For example because it could be sure that variable types do not change during runtime.
But what do we gain with WebAssembly? Faster download+compilation times? How much faster?
TL;DR: much faster parsing than asm.js (10x to 20x faster), parsing should also use much less memory, 10%..20% smaller downloads (when comparing compressed sizes; uncompressed WASM is several times smaller than asm.js), and 64-bit integers (these have to be emulated in asm.js).
I get 165ms for my 8-bit home computer emulator, which is 534KB of compressed asm.js (http://floooh.github.io/virtualkc/), and it seems the first call into the code takes another 500ms on Chrome (probably for JIT warmup). Bigger apps like UE4 or Unity demos are several times bigger and can take seconds to compile.
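(For the curious, here is roughly how you could time the WASM side of this yourself, keeping the download out of the measurement; the file name is just a placeholder.)

    // Time how long WebAssembly.compile takes for a given .wasm file,
    // separate from the time spent downloading it.
    async function measureCompile(url: string): Promise<WebAssembly.Module> {
      const bytes = await (await fetch(url)).arrayBuffer();
      const start = performance.now();
      const module = await WebAssembly.compile(bytes);
      const elapsed = performance.now() - start;
      console.log(`compiled ${bytes.byteLength} bytes in ${elapsed.toFixed(1)} ms`);
      return module;
    }

    measureCompile("emulator.wasm"); // placeholder file name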
If you are using ms as a unit, you're several orders of magnitude too slow for many domains.
3D gaming comes to mind: most games have a tight budget of 16ms per frame, and at those speeds 0.1 ms is a real chunk of that. If I download a new module and then try to load it, the game shouldn't have to hiccup for that.
The partial intent of WASM is to write things orders of magnitude larger than jQuery. Whether or not it will ever be practical to ship e.g. a complete Photoshop clone in a browser remains to be seen, but if that's your goal then you do have to start worrying about things like parse time.
Why would the whole Photoshop be downloaded and compiled? I would expect a web app to only download the parts the I use. If I want to blur an image, I don't need the other gazillion filters for example.
And hey, even if Photoshop is 1000 times bigger than jQuery, that would still compile in one tenth of a second.
Another part is that I suspect it makes maintaining a good JS compiler easier, because all of that asm.js code can be removed. I'm sure the browser vendors are happy about that.
I'm not 100% sure about any of this (I've only listened to most of the video above), but I think the gist is that WebAssembly provides a compile target for applications, meaning you can pre-compile an application so that your source is downloaded essentially as machine code. Furthermore, that means your compilation can perform as many optimizations that you care for, because it's happening ahead of time (not on the user's machine).
Without WebAssembly, browser vendors have to strike a balance between optimizing the JS as much as possible (for increased performance), and running the script as soon as possible (so the user isn't waiting too long for execution to begin).
edit: This demo of an in-browser video editor charts the FPS difference between the JS and WebAssembly implementations.
https://github.com/shamadee/web-dsp
Is the "parsing" what is displayed as "compile script" in Chromes profiler? That seems to take almost zero time. For example jquery.min.js compiles in 0.1ms here.
Libraries like jquery are not the target use case for WASM. Its intent is to make large applications viable on the web, like something you'd normally install on your computer.
"Experimenting with a prototype WebAssembly format on a build of our AngryBots demo, we saw the size of the generated JavaScript code go from 19.0 MB of asm.js code (gzip-compressed to 4.1 MB) down to 6.3 MB of WebAssembly code (gzip-compressed to 3.0 MB). This means that the amount of data the browser needs to process gets reduced by 3.0x, and the compressed download size gets reduced by 1.4x."
It seems there is this effort to get C/C++ code performing fast inside of a browser... then at the same time there seems to be an effort to get people to stop coding in C/C++ altogether and switch to something more memory safe.
Go or Rust for example. Also, I watched a talk from DConf and saw that D is adding memory safety as well. You need to mark any part of code doing pointer arithmetic as "system code" or something like that.
I think it's a shame one couldn't have happened before the other. I wish something like D or Go or Nim or something else would have won. Then there would be this effort to get that language fast on the browser.
Is anything I'm saying making sense? I don't know enough about WebAsm, is it really tied tightly to C or could Go or Rust or some future version of statically typed Python become a first-class citizen?
It is not about running C/C++, it is about running native code (or something approximating it). With the right backend, most languages should be able to target WebAssembly. The other complication with supporting Go or Rust is that, even if the language is memory safe, the code still needs to be sandboxed.
WebAssembly could allow dynamically-typed Python to become a first-class citizen, if someone were inclined to write a Python interpreter stack that compiles to LLVM (which might already exist? https://github.com/dropbox/pyston)
The purpose of WebAssembly, pNaCl, and its ilk, is to get out from under the unfortunate accident that the "assembly language" target for browsers---i.e. the symbol set the browser can directly interpret and translate into machine operations---is JavaScript, which is not a language designed with memory or runtime performance deeply considered (or type safety, for that matter). While there are projects that somewhat ameliorate this issue (TypeScript, for example, tries to add type safety to the language), writing a something-to-JavaScript compiler is a much taller order than writing a something-to-LLVM compiler, which can then be used by an LLVM-to-WebAssembly compiler---letting you ultimately write your web site in any language you are fluent in.
> It seems there is this effort to get C/C++ code performing fast inside of a browser... then at the same time there seems to be an effort to get people to stop coding in C/C++ altogether and switch to something more memory safe.
WebAssembly offers a compelling "alternative": instead of (or in addition to, if you desire) writing in memory-safe languages, with all the cost that incurs, you can write in unsafe languages and the consequence of memory errors is limited by the sandbox.
This approaches the problem from a different direction, on one side we have e.g. Go and Rust. Go with garbage collection and its overhead, or Rust with zero-(runtime)-cost abstractions which push the burden onto the programmer at development time. C compiled to WebAssembly is low-overhead but safer than native code, giving the benefits of both worlds.
I would have never expected it, but now believe C is the language of the future for the web. Built on decades of history with an unbeatably large existing codebase, extensive analysis and tooling support, standardization, raw power without the shackles of safety, yet confined to limit damage by the browser sandbox. True cross-platform compatibility with powerful HTML5 web APIs.
My experience about a month so far developing a C application compiled to WebAssembly/asm.js using emscripten has been surprisingly smooth. I can compile and test natively, including enabling the clang static analyzer or -fsanitize=address and -fsanitize=undefined to find bugs endemic to C, fix them and then deploy and run on the web. For the most part I can code directly to OpenGL and GLFW, which emscripten bridges to WebGL and other browser APIs seamlessly. I had to contribute a handful of fixes to emscripten, as well as an implementation of glfwJoystick to the HTML5 Gamepad API (also working on file drop and monitor API), but this was straightforward and easier than expected, emscripten happily accepted the patches. There are Rust (https://github.com/thinkofname/steven) and Go (https://github.com/thinkofname/steven-go) applications in this problem space but porting a similar application written in plain C (https://github.com/fogleman/Craft) to emscripten was nearly trivial (if there is any interest: https://github.com/satoshinm/NetCraft). After about a week I was able to consider the web-based port finished, and then focus on developing new features, for both web and native.
Is C compiled to WebAssembly a panacea? Not by a long shot, there are many (perhaps most) scenarios where memory-safe languages such as Rust would be preferred. But for games and other programs where performance is more critical over correctness and safety, WASM is a godsend.
Asm.js was a brilliant idea, but wasm is more like a bytecode. This is the part from NaCl that the author missed. Wasm is derived from asm.js but is also an improvement on LLVM bitcode in that it is portable, which LLVM bitcode is not. One more step in the LLVM toolchain to get portability...
And WASM is now the de facto portable bytecode. Just as JS took over the world, so will WASM. In six months, WASM will have had more program launches than the totality of .NET and the JVM.
We have yet to fathom how far reaching WASM will be. Did I say WASM enough times? I end with only this. WASM.
> "In six months, WASM will have had more program launches than the totality of .Net and the JVM."
I don't think so. The only languages that can currently target WASM are languages that don't need GC. It'll take a while before the tooling and functionality is mature enough to support the most commonly used languages (aside from C/C++).
JS didn't take over the world, web did. And if we don't talk plugins, then there wasn't much choice in the matter if you wanted to do things clientside.
One of the best things about Flash was that you could pack up all your bytecode and bitmaps and vectors and sounds and fonts into a single compressed SWF file that contained your entire application/game/demo. That made it really easy to do stuff like double-click-able client demos and deliverables for websites. Ditto for Java and JAR files.
Looks like the "compressed bytecode that runs really fast" part is now a reality. I'm hoping they'll work on making deliverable packages that are convenient for programmers and users.
Anyone know if there is there anything like this in the works?
This is just my personal viewpoint with nothing to back it up, but I doubt Google ever seriously wanted other browsers to implement PNaCl. As this post says, it was never specced and Google's big push for it was in Chrome apps on the Chrome Web Store.
Also, wasn't asm.js just a subset of JS? This line confuses me:
> asm.js and PNaCl represented quite different visions for how C/C++ code should be supported on the Web
Yes you could compile C to asm.js, but it would still be compiling it to JS at the end of the day. WebAssembly is completely different in that regard.
>Yes you could compile C to asm.js, but it would still be compiling it to JS at the end of the day. WebAssembly is completely different in that regard.
WebAssembly (at least to start with) is mostly intended as a more compact binary encoding of asm.js. It still mostly executes the same way and offers the same APIs as asm.js.
To the extent that asm.js and JavaScript are, yes. Everything is well-defined, so in theory any Turing machine should be able to run it, but it's certainly going to be easier to implement efficiently on a typical modern 32-bit or 64-bit CPU in a computer or phone than on something more unusual.
Yep! It's a completely portable target and fulfils the promise of “compile once, run anywhere”, although how well it runs depends on the particular browser.
(It is pretty amazing you can compile C code to something that actually runs on multiple machines, it's very nice.)
That is a good question. I imagine it depends on why it is a native module. If it's a native module because it utilizes features at the OS or machine level, then I doubt it could.
That's not really true. With JS glue code (including many libraries already written) Wasm modules can access anything JS can do. E.g. there are Wasm demos showing access to WebGL2, audio generation, camera/microphone, HTTP, storage, gamepads, ...
That's the point. A WASM app has access to the same Web APIs that JS does, for the most part. This was in response to the claim that WASM only gets to work off of a limited API, not an argument as to why WASM is an improvement over JS.
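Concretely, the glue is just an import object: the page hands the module ordinary JS functions, and those functions can touch any Web API. A minimal sketch (env.draw_pixel, env.now_ms, demo.wasm, and the exported run function are invented names):

    // The wasm module can't call Web APIs directly; the imports it is
    // instantiated with can, and they are plain JS/TS functions.
    const canvas = document.querySelector("canvas")!;
    const ctx = canvas.getContext("2d")!;

    const imports = {
      env: {
        // Called from the compiled C/C++/Rust code as extern functions.
        draw_pixel(x: number, y: number): void {
          ctx.fillRect(x, y, 1, 1);
        },
        now_ms(): number {
          return performance.now();
        },
      },
    };

    async function run(): Promise<void> {
      const bytes = await (await fetch("demo.wasm")).arrayBuffer();
      const { instance } = await WebAssembly.instantiate(bytes, imports);
      (instance.exports.run as () => void)();
    }
    run();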
Which is to say that they assume little-endian just like asm.js and modern JavaScript do. Architectures other than ARM and x86 are mostly dead so this isn't generally a huge problem except for the handful of die-hard Amiga users.
More importantly, could Docker be ported to WASM? Because then I could just send the server to the client. Send the whole cluster, run it all in a single threaded browser process. Make the client part of the cloud, sharing in all of the cloudly comforts. /jk
Seems like a lot of complexity for very little gain in performance. Optimizing JavaScript often gains 100x performance, and after that, rewriting it in WebAssembly would only gain up to 4x.
There is a performance upper bound. You can only run instructions as fast as the machine can. Native code can get very close to this limit, in a way that in practice no interpreted or JITed code can. This is just emulating the techniques native code uses to get close to that upper bound in the same way.
It is not impossible to make pure JS run this fast, but it's not going to happen without herculean effort put into JS runtimes. It is just easier to do it this way, and we get to leave JS, which is a benefit to everyone except people who actually like JS.
The larger goal is to provide alternatives to compiling to JavaScript. JS is a relatively large, moving, and ugly target for a compiler relative to tightly-defined bytecode alternatives like LLVM.
Interesting perspective. I've largely kept out of that fight for many reasons, but it is interesting to see a counter-argument to the Mozilla/Firefox just apes Chrome story.
I agree that Quantum is exciting, but if there's anything interesting about Photon the article you linked really didn't bring it out. What aspects were you thinking of?
In 2016 the global games market had $99.6Bn in revenue and 8.5% YoY growth. PC had a 27% market share (+4.2%), TV 29% (+4.5%), mobile 27% (+23.7%), tablets 10% (+4.5%), and casual web games 5% (-7.5%). (source: Newzoo)
Google, with its presence in mobile, tablets and the web, seems to be the winner; Edge will support WebAssembly in the future, and Mozilla, which is much smaller than Opera on mobile, reminds me of Xerox.
I don't really like the headline. I don't want Mozilla to "win", and I don't think Mozilla wants to either; I want the open web to flourish.
That said, the article is very informative, and well-balanced. It was really good of Google and the other browsers to join the wasm bandwagon. And yes, although as the author himself points out, "proclaiming a "winner" is unimportant or even counterproductive", Mozilla does deserve a lot of credit here.
More importantly, the posts weren't written for HN specifically, but in the context of discussing the open web and WebASM. Within that community this reference will be pretty obvious.
I especially remember that in the first few years of HN there was an expectation that everyone had read Paul Graham's essays and would recognize references to them.
It's a tricky title because out of context (i.e. reading the title on HN without clicking through to the content) it sounds a little puerile, but upon reading the article, the title fits very well.
The general message being: "Mozilla are being very diplomatic and restrained, whereas many in their current position would be outright celebratory"
It's worth pointing out that Mozilla can only continue to put pressure on Google and fight for the open web if people continue to use Firefox and support them. Consider switching to Firefox even if you prefer Chrome. Report websites that don't support FF. We are all better off for the existence of Mozilla, and strong viable competition to Chrome and IE.
Even if thousands of developers from HN switch, that would hardly move the needle. Ordinary users just won't care about any of this.
Servo, which I think is the most important software project in the world, is where it will start to change. That's when those of us who may not be directly contributing code into Servo need to come out and do our thing. I still fondly remember the NYT ad and the crop circle. We should do it all over again.
> Even if thousands of developers from HN switch, that would hardly move the needle. Ordinary users just won't care about any of this.
Thousands of developers from HN make websites. If they switch to Firefox, at the very least, the websites they make will support Firefox. This won't necessarily make Mozilla commercially viable, but if people are really that concerned about it, they can donate to the Mozilla Foundation.
I am horrified by the number of developers who only test in Chrome. At my last gig I found glaring UI bugs (whole menus not responding to mouse clicks, for ex) in FF. Sad.
At my job I work in Firefox, a coworker works in Chrome, and a third person works in Safari (to be honest we don't bother checking the site in any Microsoft browser). That way we are able to do cross browser testing fairly easily.
Not really. Assertions like yours are moral assertions that we should ignore the moral points at issue and instead favor some unspecified pseudo-business-y ones.
But even taken on business terms, you're sweeping a lot under the rug. As developers and entrepreneurs, we've benefited hugely from the web being an open, competitively specified platform. The more one large company can control the platform, the more it will get tilted toward that company and away from the rest of us.
That may not be bad for any given business next week; these things take time. But for anybody building a serious business, you're going to have to worry about the long-term, large-scale stuff. Google's been going 20+ years; Microsoft and Apple, 40+; IBM, 100+. They didn't get there by only thinking about the next quarter, and you won't either.
if using Chrome meant you needed to step on three kittens a day, I think I would agree with you.
but it's just browser preference, so the whole "moral" thing factors in less than whatever logo is printed on the pen I take from the junk drawer. I just want a pen that works.
One of those words that is often a tell is "just". That's where people sweep a lot of things under the rug. Including here, where you've hidden the fact that you made an unsupported assertion that assumes an answer to the question we're discussing.
I'll note that it's a different bad argument, one about consumer choice, than the one I was addressing, which was about business choices. But consumer choices too always have implications. That's why, e.g., boycotts are a thing: small decisions add up.
I'm going to have to agree with 2bitencryption here.
Everything is a "moral choice" when the person demanding the choice feels strongly about it, but that typically means you just lack perspective.
At the end of the day we're talking about browsers and websites, and while people may not LIKE it, when a business writes software it's a business decision as to whether or not they'll target all browsers or a subset.
By all means, keep on asserting things without demonstrating them and ignoring arguments and examples to the contrary. It doesn't actually convince, but I'm sure it makes you feel better.
Firefox got big in large part because they had good developer tools long before anyone else did. I worked several places where management would say things like, "don't waste time, we only need this to work in IE". The developers would nod and go right back to creating in Mozilla and then fixing it in IE after, because it was faster.
In that way there was a quiet revolution toward cross browser support.
I can confirm this, everyone I knew at the time was coding on Firefox even if no-one required any compatibility with it just for that reason, it was just much easier to code with.
When I shared my observations with coworkers, they would nod and say they had experienced the same thing. Same with peers I knew outside of work. Either we were in a very large bubble, or that was happening everywhere. And I think the rise of Mozilla aligns with those observations. It 'just worked' because everyone quietly made sure it did, even when people told them not to.
Our business decision is that Firefox needs to be supported as well. The fact that most of the developers use Chrome as their daily driver, however, results in a lot of bugs being seen and caught early (or at all) there.
This is just flat inaccurate. Given the GP comments' premise of optimizing only for the business's direct interests, the expected value of your contribution against monoculture is so negligible that it won't balance out changing damn near any habit that you had already chosen. It's a pretty basic collective action problem; if you're optimizing for yourself and your business alone, ignoring the wider picture is still the optimal decision.
The actual argument against (which others are making and which I'm sympathetic to) is that one shouldn't optimize only for direct bottom-line business interests, that businesses and people have a social responsibility, etc etc.
But that's entirely different from what you're talking about.
That's it. I didn't say we should optimize for direct bottom-line business interests. I said IT IS A BUSINESS DECISION.
It is not the decision of the developers unless the BUSINESS GIVES THEM THE ABILITY TO CHOOSE.
And even in THAT, it's a business decision.
That's all I said. The business that pays for the labor and chooses the direction they go in.
This idea that a business targeting a specific browser is some horrible social problem is silly. If I'm making a product that's meant to sit in a kiosk running Chrome OS, I'm sure as shit not going to pay for FF and Edge support. If I get it by accident, fine, but if something breaks in FF I'm not putting any effort into fixing it.
Eh, I'd rather not force my views onto people, against their interests, without their knowledge or consent. That strikes me as pretty dishonest.
I use both Firefox and Chrome pretty regularly, but I'm under no illusions that the quality of Firefox isn't quite a bit lower in multiple very concrete ways for my day-to-day usage (presumably because Mozilla has less resources than Google).
In the past, I have switched family members to a _better_ browser, but I'm talking about IE6 to Firefox, the usability gap between which was 1) in favor of the switched-to browser and 2) waaaayyy bigger than Firefox v Chrome in 2017. Even then, if I had to make that same decision again today, I would probably first convince the person whose computer I was modifying. Especially for non-technical users, having things suddenly change out from under you can be really jarring in an environment that's already pretty confusing.
Given the original poster's perspective, Servo is certainly the least likely thing to change the tide. For an end user it makes absolutely no difference whether a reloaded Mozilla is written in C++, OCaml, Rust, whatever. Features, performance, security and speed of evolution are what count.
Servo's WebRender sub-project has amazing performance (GPU accelerated compositing and rendering), and is scheduled to be merged in Firefox as fast as possible.
Stylo (Servo's style subsystem) has already landed in Firefox behind a preference flag (not everything is wired at this point), and it also improves perf.
So, yes Servo is of paramount importance for Mozilla's future, because it does make a difference to end users.
Another advantage of Rust is that it allows devs to avoid a whole range of bugs making it easier for them to iterate and ship updates without introducing new tricky bugs (race conditions can be hard to debug).
While dynamic languages may be nice when you want to explore a problem space, a stronger/more static type system is a benefit when upgrading large, mature code bases...
> Stylo (Servo's style subsystem) has already landed in Firefox behind a preference flag
Stylo is still a compile-time option, but will soon be built by default and controlled by an about:config flag. You can watch the progress to build by default in this Firefox bug:
> Servo's WebRender sub-project has amazing performance
Have you gotten amazing results on your machine? I ask because I think Servo, Rust, WebRender are awesome and I'm rooting for them, but the performance has not been great when I try Servo or WebRender in Firefox on multiple machines. Maybe it's just the machines I've tried on though.
And impact-wise, fast content blocking on by default is the feature that gives the biggest bang for the buck. It makes for a nice, fast and more secure browsing experience, plus it is a differentiation Chrome might be reluctant to enable (without crippling it too much).
Browser benchmarks should include how long it takes to watch a 30-second YouTube video from application start to finish, or how much 3rd-party feature/bloat/mal/adware a browser downloads when connecting to $major_site.
As the lead dev of VLC in a recent interview said, they’ve been offered huge amounts of money to include Google Chrome in their installer, and saying no was the hardest decision he’s ever made.
As long as Google has fraudulent ads for Chrome "your browser is outdated, update now to Google Chrome" on their websites, as long as Google intentionally makes the experience worse for Firefox (see the youtube redesign), as long as Google pays developers to ship Chrome as malware with every single installer, as long as Google forces OEMs to install Chrome with Android, Chrome will rule the market.
MS was a much bigger impenetrable monopoly and the Web was won back. It can be done again. Having a great product and grassroots evangelism certainly help.
(Not that I think Chrome is THE ENEMY. It's constantly evolving, multiplatform and open source. IE was none of that. But I agree Google's practices you described are despicable. Huge kudos to VLC for doing the right thing)
With IE, Microsoft was influencing what the web was viewed with via its control of the client - Windows.
Google is influencing what the web is viewed with by simply being such a key part of the web itself, and using its weight from that direction instead.
(One could argue that they have Android for the client, but as a percentage of web users it's still far from what Windows had in the IE6 days.)
Firefox won back the web by having a great product, grassroots evangelism, and a Microsoft who badly neglected their competing product for many years. They left an opening.
Chrome might be losing its lustre but it's certainly not being treated the same way. I think Firefox's new battle for market share might be harder than it was vs IE, simply because Google is still so active on this front. In response, the only real new thing in our arsenal is hindsight, which I guess is what the VLC example above is a result of.
> With IE, Microsoft was influencing what the web was viewed with
Microsoft was trying (and succeeding) for the web not to be a preferable API to Win32, so as to keep that way a high barrier to entry into the OS business. Which is why IE was squarely against standards.
> MS was a much bigger impenetrable monopoly and the Web was won back. It can be done again.
The Web won because a) its introduction was a one-time technological change whose social impact was on the order of the printing press, and b) Microsoft had gotten fat and lazy on their monopoly revenues from Windows and Office.
We can't get cocky here. It is perfectly plausible that HTML will still render 500 years from now. And I'm dead certain that it will still dominate 20 years from now.
Growing up in a major technical revolution, it's easy for us to assume that the future will have a lot of technical revolutions that will keep knocking monopolists, rentiers, and authoritarians off their perches. And if that happens, great. But we should really be planning for the opposite case.
Internet Explorer wasn't really multi-platform. The Mac version used a different rendering engine (Tasman as opposed to Trident) and it of course didn't have support for ActiveX extensions.
For a long time being a 90s Mac user sucked as different websites required you to use IE on a Windows PC.
Mobile is where the war is being fought. A better browser can ship with hundreds of millions of mobile devices. Firefox-next with ad blocking and better support for parallelism is just what phones need. Users will use what came with the phone.
It's the default browser I'm using right now on Android. Performance is still not on par with Chrome but it did improve since the last time I tried it a few months ago. The ability to install extensions surpasses anything else for me so I'm sticking with FF.
And saves battery and data. The absence of plugins on ios/safari and android/chrome is a severe deficiency. It is also interesting that adblocking firewalls get denied by the app stores...
It doesn't have text reflow after zooming, which Opera has. For all the other features I don't see anything horrible. It used to be slow at drawing pages, but it's not anymore. I've been using it since Opera changed hands, and it's nice to have uBlock on the phone too.
I've been using Brave[1] on my Android phone for the past two months. It's been brilliant. A much better experience than Firefox, which was a bit slow and some pages loaded as a white screen.
I'd love to use Brave on my desktop too, but their lack of plugin/extension/add on support cripples it a little. There's a couple I just can't live without. Using Iridium[2] on the desktop instead.
How do you get Firefox on devices? If OEMs want to use the Google Play store on any single of their devices, they have to make Chrome the default browser on all of them.
I believe Chrome has to be installed, but it clearly doesn't have to be the default. Just see all the Samsung phones with the default Samsung Internet Browser as evidence.
Do you have a link to the interview? And how is the YouTube experience worse on Firefox? I didn't know of Google's practices except for the banner in other browsers.
> "To be honest we’ve been offered some insane amounts of money to do bad stuff around VLC, like shipping tool bars at the same time of the installer of VLC or or installing other software like Google Chrome and so on. And when you see the numbers they propose to you, you’re just like: How the fuck am I going to say no to that?"
> "The thing is, it’s not only my project so I’m not allowed to do that. [It’s the] legacy of other people. That wouldn’t be moral."
The European Commission has successfully acted against Microsoft in the past and has repeatedly tried to act against Google on various antitrust concerns.
I think, though I could be wrong, that he/she is referring to what the European Union did with Microsoft: requiring them to include a screen that asks new Windows users to select a default browser and offers choices besides Internet Explorer.
Perhaps he means a similar thing to what they did to Microsoft by forcing them to make it possible to choose a different browser when installing Windows.
It makes heavy use of parallelism and the GPU, which increases responsiveness. It also uses Rust as the programming language instead of C++, which prevents whole categories of bugs like buffer overflows and use-after-free.
Demonstrated a successful code execution attack against Safari to gain root privileges using a use-after-free vulnerability in Safari and an out-of-bounds vulnerability in Mac OS X.
Demonstrated a successful code execution attack against Microsoft Edge in the SYSTEM context using an uninitialized stack variable vulnerability in Microsoft Edge.
Demonstrated a successful code execution vulnerability against Microsoft Edge in the SYSTEM context using an out-of-bounds vulnerability in Microsoft Edge and a buffer overflow vulnerability in the Kernel.
etc. Highlights mine. All of these are prevented in safe Rust.
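To make that concrete, here's a minimal sketch (my own example, not from the reports above) of the kind of bug safe Rust shuts down: the use-after-free won't compile at all, and the out-of-bounds access is forced through a checked API instead of silently reading adjacent memory.

    fn main() {
        // Use-after-free: the borrow checker refuses to compile a read of `s`
        // after ownership has been given away and the string freed.
        let s = String::from("hello");
        drop(s);
        // println!("{}", s); // error[E0382]: borrow of moved value: `s`

        // Out-of-bounds: `get` returns an Option instead of reading past the end.
        let v = vec![1, 2, 3];
        match v.get(10) {
            Some(x) => println!("found {}", x),
            None => println!("index 10 is out of bounds"), // no over-read
        }
    }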
Do you have any evidence that's why users are switching away?
My general belief is that general-audience users don't care at all about bugs like those. Which is why we have so very many of them, and have for decades.
Rust's fearless concurrency[0] allows devs to write and refactor performant parallel code without the risk of introducing bugs, enabling them to ship upgrades faster.
It's the same benefit you get from strong, static types in a large project vs one with dynamic/weak types, but for another category of bugs. In large projects, it makes a difference.
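As a rough illustration of what "fearless" means here (my own sketch, not taken from the article linked as [0]): the types force you to state how data is shared and how it's synchronized, so forgetting the lock is a compile error rather than a data race discovered in production.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared counter: Arc handles shared ownership across threads,
        // Mutex handles synchronization. Removing either is a compile error.
        let counter = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..8).map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        }).collect();

        for h in handles {
            h.join().unwrap();
        }
        // Deterministically 8000: no torn updates, no lost increments.
        println!("total: {}", counter.lock().unwrap());
    }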
I switched away from Firefox because at the time it was bloated, buggy, and slow. I haven't switched back because I am more familiar with Google's developer tools.
I can tell you exactly why I'm not using FF: extremely poor support for switching between multiple profiles. Oh, it's possible, but compared to Google's seamless support for the same, it's very awkward, even if you install an extension (which is only available from a third party).
I think it's important to scale back expectations for Servo, given that Servo itself won't be ready to be used as a main engine for quite some time due to insufficient site compatibility.
Project Quantum will use only pieces of Servo alongside Gecko. Maybe Mozilla will fully replace all of the old single-threaded code at some point in the future, but I imagine that's a ways off.
Given the gradual pace of these changes, it's likely that any performance wins will be copied by the other engines before Firefox gets too far ahead.
I don't follow these things very closely, but why is chrome so much more popular than firefox these days amongst non-technical users?
I use Firefox as my main browser but my girlfriend uses chrome on her computers so I get to use it from time to time. I don't notice any major differences: the extensions I care about are available on both browsers, the speed is not noticeably different, etc. On top of that, Firefox predates chrome, so it's not like the IE days, when people didn't switch because they didn't know better. So what happened exactly? Is there some chrome killer feature that I just happen not to use myself?
There are people who actually do that because otherwise they get an inferior version of a website even though it works perfectly fine on Firefox as well.
Do you remember when chrome dropped? Gosh it was so blazing fast in comparison to everything else. I think that initial period is why so many regular people use chrome. It hasn't gotten to the point where they look anywhere else yet, at least in my opinion. I think the other thing that pushed it this far is just the fact that it's new.
> Gosh it was so blazing fast in comparison to everything else.
It was dead slow, actually, unless you were running a modern (i.e. very fast) multicore PC with a very small number of tabs. For example, when trying Chrome out soon after release, it managed to grind my PC to a halt because I dared open something like 5 tabs (whereas Opera was happily running double to triple digits). Back in 2008 multicore CPUs weren't as widespread as they are today, so for the most common cases Chrome was just slow, context switching PCs into the ground.
> I think that initial period is why so many regular people use chrome.
Regular people don't simply install new browsers; it's the people familiar with computers (like the family geek, or the guy maintaining PCs for a living) who push them onto regular people. Anecdotally it went something like this: technically inclined people were supporting Firefox (because it wasn't IE and because of A LOT of marketing) despite it being a crappy browser and there being better alternatives. Now, Firefox was not slow per se, but it definitely FELT slow, so when Chrome was launched the same people who had popularized Firefox started promoting the new shiny trinket. Everybody kept saying "it's so fast!" - well, it certainly FELT faster than Firefox, at least when it came to the UI, and that was enough to switch.
> I think the other thing that pushed it this far is just the fact that it's new.
I think so too. Shiny new things have the side effect of attracting the enthusiastic bandwagon jumping types, and enthusiasm can be contagious.
> It was dead slow, actually, unless you were running a modern (i.e. very fast) multicore PC and with a very small number of tabs.
I totally dispute that. I had a very modest PC at the time and I remember vividly using Chrome for the first time, noticing how much better it performed compared to firefox, especially if you had many tabs open.
I remember other people telling me it was much faster. I don't remember it actually being much faster, but I switched anyway because of horrific memory leak problems with firefox.
> noticing how much better it performed compared to firefox
Given that I used Opera's performance as a basis for my statement I guess your comment, instead of being a glowing praise of Chrome, simply reflects very poorly on Firefox.
At the end of the day, Chrome ended up using too many system resources to be a viable option for me at the time. People accused me on occasion of being an Opera fanboy, but, objectively, it was hard to justify why Chrome would need more resources than Opera while delivering significantly less features.
The same thing happened when Phoenix dropped. Over the years, though, it packed on the pounds and started to look more like Navigator. Chrome's waistline has been expanding as well.
I don't have a good answer to your question, but for me it was actually performance. And this is particularly baffling, since Firefox seems to do extremely well in all benchmarks, often beating out Chrome.
But, as a quick test, I closed and restarted Chrome, and it was up again instantly (say 500ms). I did the same with Firefox (after a generous warmup / caching session), and got 4 to 6 seconds each time. Clicking links and page loading feels similar; on Chrome I don't notice it, on Firefox I always do. Am I the only one that feels this way?
> Clicking links and page loading feels similar; on Chrome I don't notice it, on Firefox I always do.
Because Chrome, by default, enables prefetching [0] (it loads links before you decide to click on them). Firefox will never do that due to obvious privacy concerns.
It's the usual "principle vs convenience" thing, where most people choose the latter.
> Link prefetching is when a webpage hints to the browser that certain pages are likely to be visited, so the browser downloads them immediately so they can be displayed immediately when the user requests it. This preference controls whether link prefetching is enabled.
> Possible values and their effects
> true
> Enable link prefetching. (Default)
You can check the value in your Firefox with about:config and searching for network.prefetch-next
My one is "false" and it's shown in bold face, meaning that I changed it to that value. Privacy concerns and also legal concerns: what if a site links another site that the legislation of your country (or the country you're travelling to) doesn't allow you to access? At least I get some hints of where I'm heading to if I'm loading pages myself.
How does prefetching affect things like the not-yet-viewed website's stats? Since it is a Google browser and Google Analytics is so dominant, is it to Google's advantage to do this for reasons other than convenience to the user? (Inflated stats for the prefetched website even though the user never viewed it?)
Nope. I really want to like Firefox but its performance is just so much worse across the board than Chrome.
Another example is video playback: Firefox, when viewing video, heats up my laptop to the point where the fans kick in at full blast. Chrome stays nice and cool on the same material. The difference in battery life is noticeable too.
For VP8 and VP9, Firefox and Chrome use the same video decoding backends, so it's likely not video decoding that is causing the problem. For H.264, Firefox uses the OS-provided codecs; I'm not sure what Chrome does, but in any case that's the only possible solution for Firefox due to patent issues.
If it's making that much of a difference in heat, it's probably using a software renderer instead of passing the decoding to the GPU. Check (Menu)->Help->Troubleshooting Information and see if it says "Supports Hardware H264 Decoding: No;". If you can't fix it, I recommend using an extension that adds video URLs to your VLC playlist. VLC supports playing YouTube URLs natively.
I'm on Linux and I don't feel any noticeable speed difference using FF vs Chrome (startup time doesn't bother me at all, my browser is always running and I rarely restart my machine). I've been using FF since the early days and never found enough reason to switch to any other browser, and I'm quite happy with the improvements that have been coming to FF lately and what the future holds.
I think for me it's specifically UI performance. Opening a new tab in Chrome is instantaneous. In Firefox there's a slight but noticeable stutter that annoys me just enough to avoid using it.
Session restore can make firefox take a lot longer to start. It's possible your profile has a very old session with a lot of tab history. Maybe try unpinning and closing all your tabs (there should be a 'bookmark all tabs' you can use to get them back later), and then exit firefox. This should prune all that tab data and history (session restore saves a pretty considerable amount of back and cookie history for each tab.)
> but why is chrome so much more popular than firefox these days amongst non-technical users?
Google aggressively pushed Chrome on web users.
Banner messages claiming the user's browser was out of date (it wasn't, they just weren't using Chrome), or that this website works better on Chrome.
Redesigning their web services to be coincidentally worse on Firefox.
The numerous and excessive ways it was bundled with various other application installers. Most users don't customise an application install, they just stick with the defaults, and the result was they ended up with Chrome as the default browser.
Whilst Microsoft eventually started advertising Explorer as a response, Chrome was the first browser I'd ever seen advertised on billboard posters (liberally plastered all over London at least) and in non-technical magazines and newspapers.
I can't speak for anyone else, but for me personally, the deep integration with Google is a feature. Almost everything I do is on Google (email, domains, Drive, cell carrier), so having that integration in Chrome makes my life that much more convenient.
I got into chrome as a teenager, though, when my dad switched from firefox to chrome due to Chrome being apparently faster than firefox at its debut.
Firefox worked to close all memory leaks a few years ago. New ones are treated as bugs and fixed quickly. They even got aggressive about limiting memory that addons use. You can check about:performance to keep an eye on them.
> Consider switching to Firefox even if you prefer Chrome.
I have given Firefox plenty of chances. It's just too slow: webpages load slower, and when they are loaded interactions feel awful (low FPS on large webapps).
I haven't noticed this myself, and I use both extensively on a daily basis. I currently only use Chrome for work because of Google Meet. Drives me batty that I don't have a choice there.
I have the opposite problem somehow. Chrome is incredibly slow and pages take dozens of seconds to load. Firefox feels so fast and responsive. I think the issue is due to adblock though, which I keep off of firefox.
You pretty much sum things up as they were for me half a year ago. I'm not happy about them putting the sword of Damocles over Vimperator and Pentadactyl though, which is why I'm mainly using Qutebrowser on the desktop now.
That said, Mozilla is still awesome and FF will most likely remain my second go-to browser.
One other thing I stumbled upon, the responsive design mode in Chrome was not working correctly. I was pulling my hair out when developing with it. I switched over to Firefox in their responsive mode and it worked correctly and reflected what I saw on an actual phone.
What has always frustrated me most about Chrome is that certain bugs can take an eternity to fix despite there being many reports on an issue.
You make it sound like Mozilla is somehow fighting against Google for the open web. But virtually all browser vendors are aligned on this issue, and have been working towards improving the web through standardization, improved performance, and functionality for years now.
The point of the article is that while Google is mostly aligned on the issue, Google also does things like creating PNaCl, which is not an aligned behavior.
(P)NaCl was introduced to replace NPAPI, at a time when the majority of rich web interaction still happened through plugins.
I'm glad to see it killed off, and have been expecting it for a few years now. JS/asm and (hopefully soon) WebAssembly have supplanted many of its features and benefits.
Still, this doesn't strike me as anti-open web. Google offered a solution when one was needed - it didn't gain traction, so they eventually retired it.
Google introduced PNaCl and made it available as a feature any Web page could use, knowing that there was effectively zero chance it would ever become a cross-browser Web standard. That was anti-open-Web.
And yet I can still only use Google Meet in Chrome. And many "standards" are only partially implemented in other browsers, or require browser-specific prefixes.
I'd also go so far as to say that Google's AMP project, as implemented, displays a distinct step away from an open web.
I use FF, although Mozilla focuses on thousands of other things instead of making FF a great browser.
As a developer I'm annoyed every time I enter "com.scala.List" in the address bar and FF does not use google to search but thinks this is a URL. No, "List" is not a TLD and no, that website does not exist.
As a developer I'm annoyed every time I enter "defined.in.hosts.file" in the address bar and Chrome does not believe this is a url but performs a search instead. Yes, I can define anything as a valid domain in hosts and yes that website exists on my computer.
It does a DNS lookup, and if it's a valid domain it shows you the suggestion. If you see it for complete nonsense, then it's possible your ISP is doing DNS hijacking like mine!
Eg. if I type "cheese", it shows "Did you mean to go to http://cheese/"? If I click that link I get TalkTalk's "Error Replacement Service" full of ads (or at least I did, till I switched to Google DNS because TalkTalk's "opt-out" system has been conveniently broken for years)!
Ah, that explains that. I always found that feature annoying; I didn't know it was because my ISP was misbehaving. I always get a CenturyLink search page when I typo URLs.
The new Google "did you mean to go to ?" nonsense is something else to add to that link!
My ISP (TalkTalk) claims to have an opt-out page but the forums suggest it's been broken for years, and today it is a 404. I have an open issue with the CEOs office to opt me out manually but they've been pretty useless so far.
If you add a slash to the end, it'll always treat it as a domain. eg. "cheese/".
I did see someone from Google ask if it'd be useful if after the first time, when Google knows it's a valid domain, it should just go there directly (even without the slash). Everyone said yes, but it doesn't seem like it was ever implemented!
I feel exactly the other way about Chrome; whenever I need to type in a test site URL, it thinks it's a search query, and I need to go back and stick http:// in front.
Does the 'g' prefix do anything? On my Firefox it still searches for 'g com.scala.List' in Duck Duck Go, my default search engine. When I run that on Chrome it searches for 'com.scala.List' on Github because I've chosen g as a search leader for Github (which is super convenient and I wish I could do that on Firefox).
Why should Firefox work like Chrome? They're two separate products, and Firefox offers an explicit search bar for disambiguating searches from addresses. Especially important since 'List' could indeed be a TLD in the near future.
The point of these commands (invented by Opera by the way) is to give you choice what search engine to use while not sacrificing your performance. Google's approach is different: remove choices that might confuse or distract you. Choose what you like.
I agree. I don't know where Mozilla is spending their money, but they are years behind in regards to security enhancements in comparison with Chrome, Edge, and IE11. Around the IE7 level. Still waiting for 64-bit Firefox with a sandbox, per-tab processes, and CFI.
I thought that's what Electrolysis was. We've switched to an ESR release with e10s disabled because of an incompatible add-on listed as compatible. Annoying, because the idea is if an add-on doesn't work with e10s, e10s will be automatically disabled. And of course in this case it doesn't since the add-on works "great" with it!
I believe the main thing Electrolysis does is split the UI and rendering into separate processes. It also creates separate processes for some other tasks. But as far as I know it doesn't give each tab a separate process.
Unfortunately it's not as easy as that. My web development is hugely dependent upon chrome's browser debugging. It's made my life a lot easier being a JavaScript dev. I'm sure Firefox has made strides but does it offer something better than Chrome? Probably not, there is a reason most web development shifted to Chrome in the first place.
It's true. I've been using firefox again for about a month now. It's pretty decent, but definitely lags behind. Stack trace errors in the console don't respect source maps. A 3D scene I've been working on has been slow to load/refresh, whereas it's a non-issue in chrome. I'm sticking with it for now, as I don't want to be all in on one company, but it's hard at times.
I'll try Firefox after they complete the switch to Project Quantum. It will be a very serious competitor, especially on mobile phones, where using multiple cores and GPU rendering matters even more. But I'll switch because it will be safer and faster, not because of ideological reasons.
At my office, there is a proxy to access the internet. When a site is blocked by the Blue Coat filter, I launch PuTTY to open a tunnel to my web server and launch Firefox Portable using my tunnel as a proxy. I used to do it using Chrome, but the company has set a policy to block configuration of Chrome's proxy (like Internet Explorer's). Firefox is the only browser that allows uncensored (and private) internet access in my office.
At my house (ubuntu gnome), windows firefox on wine is the only way I have found to access some webTV based on flash.
It may not be sufficient to just switch to FF. I think it likely that what FF needs is people putting in the grunt work of continuing to optimize the hell out of it so that it's performance-competitive with Chrome.
Users will choose the fastest browser that works, in general.
I'm more interested in the elusive-but-oh-so-important "Feels faster" metric, which is harder to capture in benchmarks.
To my taste, Chrome juuuust edges out at a cursory glance, but barely (though at this point, it's a little sticky for me because it has my Google account credentials, my bookmarks backed up to the cloud, etc., etc.). But it's definitely looking better than it did when last I tried that comparison.
No, I won't support Mozilla. I used Firefox for a long time, but then they decided to drop their extension capabilities for something very similar to Chrome's. Why wouldn't I just use chrome?
Second, the zoom functionality in Firefox is broken. On a 4k display, pages just get jacked up after doing Ctrl-+ a few too many times. Chrome's zoom is far superior. People have told me, "just change the default pixels per inch" nonsense. No, Firefox's zoom is just broken.
Lastly, after the SJW witch hunt and ousting of Brendan Eich, I don't care what happens to Mozilla.
WebAssembly and PNaCl always seemed like a hack. A very elegant, well-thought-out hack, but still a hack. It required a 64-bit OS but could only ever operate in 32-bit space.
Should have been dead on arrival because of that fact alone. For something that was trying to bring C/C++ and video games into a browser, setting a 4GB max on memory should have been a non-starter.
WebAssembly is unrelated to PNaCl and has no limitations on the bit width of the host OS or the running program. PNaCl has those limitations because of how its sandbox works, but WebAssembly does not use a sandbox at all; it uses Javascript's security model instead.
(The WebAssembly MVP only supports the wasm32 target, but wasm64 is also planned.)
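A tiny way to see where that 4 GiB ceiling comes from (my own sketch, assuming a WASI build so the print actually shows up): on any wasm32 target a pointer is 32 bits, so linear memory can't address more than 2^32 bytes; wasm64 would widen that.

    fn main() {
        // Prints 4 when built with `--target wasm32-wasi` (run it under a
        // WASI runtime such as wasmtime), and 8 on a typical x86_64 build.
        println!("pointers here are {} bytes", std::mem::size_of::<usize>());
    }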
I'm afraid pretty much every part in the pipeline of getting websites to your computer and shown on screen is a big giant hack of which it's a wonder that it even works at all, most of the time.
Nobody lost. We all "won", because, for the time being, Chrome is important for Google: They need to keep the open web as a platform competitive with Apple's iOS ecosystem, as well as Facebook's walled garden, because Google rules advertising in the first, but not in the latter two.
Thus, Google's interests align with the users', and also Mozilla's. It's perfectly possible for this to change when, for example, it becomes lucrative for Google to move people from the open web to android apps. At that point, we should all hope to have Mozilla and others like it still around.
It shouldn't be surprising that Google can outcompete any other organisation when it sees something as relevant to the absolute core of its business.
In terms of usability it's already on a par, it's just a question of waiting for average users to have a strong enough reason to switch. Potential speed improvements from Project Quantum seem like the best hope in the short term.
Yes, what about it? Oh right, it might stop working after Firefox 57: https://github.com/piroor/treestyletab/issues/1224 . There are people working on it but that isn't true for many other extensions that are no longer under active development.
What is the point of using Chrome if you can't have something like Tree Style Tabs? Unless you're a beta user I don't see the point. I personally will stop using newer versions of Firefox if they stop supporting tabs on the side.
WebAssembly will finally put an end to the constant battle between browser vendors as to who has the fastest Javascript Engine. The endless benchmarks about how V8 is faster, or Chakra is faster, or SpiderMonkey is faster, and so on and so on. Countless man-hours have been poured into building the fastest JS parsers imaginable, and now WebAssembly is going to come along and sidestep the whole thing in one go by moving the parsing stage off the client.
Javascript parsing technology will go down in history as ultimately the biggest waste of time that mankind ever indulged in, all because no one stood up and questioned if this language was even a good fit in the first place. In a few years the WebAssembly creators may even win the Nobel Peace Prize for finally ending the biggest battle in the never ending "browser war".
> Javascript parsing technology will go down in history as ultimately the biggest waste of time that mankind ever indulged in
If there is absolutely nothing else in this world that you can think of that might be a bigger waste of time than performance optimizations that have benefited billions of users for many years, you're not trying very hard.
Maybe in a very distant future. In the short term most web content will still be delivered as Javascript (maybe transpiled from TS & co). WASM will cover some niches, e.g. running some native applications and games directly in the browser, or making some high-performance low-level libraries (which might have a JS API on top).
Besides that, WASM is also executed by the Javascript engine, so its speed still matters.
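For the "low-level library with a JS API on top" case, here's a rough sketch of how small the Rust side can be (the function name and build setup are mine, not from the comment): build the crate as a cdylib for wasm32, and the exported function becomes a plain WebAssembly export that JS glue code can call after instantiating the module.

    // lib.rs -- build with: cargo build --release --target wasm32-unknown-unknown
    // (Cargo.toml needs crate-type = ["cdylib"].) JS can load the resulting
    // .wasm with WebAssembly.instantiateStreaming() and call `dot` directly;
    // the engine validates and compiles the binary, with no source parsing involved.

    #[no_mangle]
    pub extern "C" fn dot(a_ptr: *const f32, b_ptr: *const f32, len: usize) -> f32 {
        // Safety: the JS glue code promises both pointers refer to `len`
        // f32 values that it wrote into this module's linear memory.
        let a = unsafe { std::slice::from_raw_parts(a_ptr, len) };
        let b = unsafe { std::slice::from_raw_parts(b_ptr, len) };
        a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
    }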
> Javascript parsing technology will go down in history as ultimately the biggest waste of time that mankind ever indulged in, all because no one stood up and questioned if this language was even a good fit in the first place.
Javascript, in its current ES2016 version, is a pretty good and powerful language. ES2015 is also fine.
The problem is that most JS code out there has been done in the older versions of Javascript (pre-ES2015), which is really a terrible language (doesn't even have clean and clear variable scoping!).