Prime Video Uses WebAssembly (amazon.science)
428 points by pacificat0r on Jan 27, 2022 | 247 comments



The egui framework they mention is pretty neat:

https://emilk.github.io/egui/

It's an entirely custom toolkit, so don't expect it to have a native look and feel, but it's a GPU-first design with multiple back-ends. It can be used in native OpenGL apps too. It's an immediate-mode UI, so it's very easy to build and update even complex windows. Great choice if you want to prototype a game.
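
For a flavour of the immediate-mode style, here's a minimal native eframe/egui sketch (a hypothetical demo, not code from the article; the exact `run_native` and `App` signatures vary between eframe versions):

```rust
use eframe::egui;

#[derive(Default)]
struct Demo {
    clicks: u32,
}

impl eframe::App for Demo {
    // Called every frame: the whole UI is re-declared here each time.
    fn update(&mut self, ctx: &egui::Context, _frame: &mut eframe::Frame) {
        egui::CentralPanel::default().show(ctx, |ui| {
            ui.heading("egui demo");
            if ui.button("Click me").clicked() {
                self.clicks += 1;
            }
            ui.label(format!("Clicked {} times", self.clicks));
        });
    }
}

fn main() -> eframe::Result<()> {
    // Note: newer eframe releases expect the closure to return Ok(Box::new(...)).
    eframe::run_native(
        "egui demo",
        eframe::NativeOptions::default(),
        Box::new(|_cc| Box::new(Demo::default())),
    )
}
```

There is no retained button object to hold on to; the click is reported by the same call that draws the widget.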


The biggest downside of this is that everything is painted as "graphics", and you can no longer use the web inspector to modify the fonts, colours, etc.

While few people (like myself) do this directly, lots of people use it indirectly through extensions like Adblock, Stylus, etc.

For example, I find the fonts hard to read due to the colour choices; then I zoom in and it's better, but now this "Widget gallery" no longer fits on my screen and there is no scrollbar.


Yes, I really hope this doesn't become common. By not utilising the DOM, you're killing accessibility, autofill, customization (things like reader mode, automatic dark themes...)... These are things that regular people rely on every day (not just us techies writing userscripts and Stylus themes).


It will definitely be the new common. By killing Flash et al. without comparable tooling, while at the same time offering WASM, it was only a matter of time until we had the revenge of the plugins.

Basically 10 years wasting time to come full circle.

https://leaningtech.com/cheerpj

https://leaningtech.com/cheerpx-for-flash/

https://opensilver.net/

All three of the major ones are now back, but it's OK, WASM is great!


It's not really a "WASM problem", as such; you can use the DOM from WASM and that works quite well. Similarly, you can do all this kind of graphics drawing with just JS as well.
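
To make that concrete, here's a minimal hypothetical Rust/wasm-bindgen sketch that manipulates the real DOM from WASM (assumes a crate depending on `wasm-bindgen` and `web-sys`, with the `Window`, `Document`, `Element`, `HtmlElement`, and `Node` features of `web-sys` enabled):

```rust
use wasm_bindgen::prelude::*;

// Runs automatically once the WASM module is loaded in the browser.
#[wasm_bindgen(start)]
pub fn run() -> Result<(), JsValue> {
    let window = web_sys::window().expect("no global window");
    let document = window.document().expect("no document on window");
    let body = document.body().expect("document has no body");

    // Ordinary DOM manipulation: the result stays inspectable, styleable,
    // and visible to extensions, unlike pixels painted onto a canvas.
    let p = document.create_element("p")?;
    p.set_text_content(Some("Hello from Rust/WASM via the real DOM"));
    body.append_child(&p)?;
    Ok(())
}
```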

It's just a choice this framework made. I bet there are actually some real advantages to it, but also disadvantages. I very very much like the customizability of the web, so I'd be very surprised if the advantages would outweigh the problems.

In short, I don't think it's really the same as Flash.


Concur. I block wasm and webgl, mostly for security and privacy, so don't use sites that require it. Unfortunately, as I also block trackers, most site operators won't even realize people like me visit their sites.


Does blocking WASM improve privacy? As I understand it, it essentially has the same capabilities as regular JS(?), but I never looked very deeply into it, so maybe there are some parts I don't know about(?)


Theoretically, wasm shouldn't have more access, though I would be moderately concerned about bugs, and better access for timing and rowhammer.

Practically, it's mostly used for tracking and making users' machines execute even more inscrutable code. IMO, if you need wasm for "normal" pages and apps, you're doing something wrong. There are notable exceptions (say machine emulators, maybe 3D games and such), but they are few and far between.


It's ridiculous to me to even call these things web browsers anymore. A more realistic term is any-type-of-application-cross-platform engine. But that's not really concise. Why did the web have to become the primary development platform for nearly everything, with WebAssembly, WebGL, notifications, etc?

I, for one, am a fan of the fact that Gemini is growing in adoption (at least as far as I can tell). Such a nice, stout protocol.


The good thing is this new cross-platform runtime called the web browser has much better sandboxing and permissions models. Not perfect, but better than before, and it can be improved. Something like uMatrix with granular, comprehensive per-site permissions to enable all the settings, such as e.g. wasm, is needed. Does anyone know of such a thing?


It feels like a huge step backward in terms of all of that. It's what Flash used to be, except less blatant security risks I guess.


> less blatant security risks

For now. Flash was also fine in the early days.


Just look at the growth in the number of Adobe employees between 1990 and 2019 at this link:

https://dazeinfo.com/2019/11/08/number-of-adobe-employees-wo...

If headcount is any way to measure success, then the killing of Flash did Adobe the world of good.

edit: not a corollary, but the capacity to manage Flash security might have arrived too late. I recall the late-2000s sudden Acrobat hegemony, a stranglehold on corporate document defaults so absolute that they were uncaring about opt-out-only browser extension auto-installation and more dark patterns than I care to remember. Unless the dark patterns secured that growth. Ugh. One of the primary factors in my purchase of a top-model iPhone, my first Apple product that wasn't Snow White era, was the ability to print to PDF. (edit: removed an anecdote that got accidentally aggregated)


I mean I would just argue that the accessibility APIs are currently tied to the DOM and we should fix that.


Accessibility APIs need a semantic layer, which the DOM provides. You can't just make every arbitrary GUI accessible. The app developers will have to pay the price (more development time).


I'm only familiar with Flutter, which just happens to have a render-to-canvas option on the web, but it has a full accessibility tree designed for exactly that already, with zero work from the developer.


Hopefully the GUI developers realize this is needed and put in the work, so that work can be shared by the end users of the GUI frameworks. I'd hate to think developers are going to reinvent the wheel on each project, but I've been less surprised before.


> extensions like Adblock

I suspect that if Amazon have considered this, they certainly didn't see it as a disadvantage!


> we built an application that overlays debugger information on an application scene render using egui, a Rust GUI library

This is the context. They use egui to display debugger information - performance does not matter for the debugger information.


It does if you're debugging performance issues.


If you need good performance to debug "performance issues" - lack of good performance - I don't think you will solve your problem.


I'm not sure what you're saying. You need good performance in your debugging tools to debug performance in your main app, otherwise when you enable the debugging tools they will dominate any measurements.


It can be very difficult to replicate a data race when using shared memory without decent performance when debugging.


You can easily tag imgui update/render times in profiling sessions. Then they are isolated from the rest of the work that's happening.


Checked the demo. There are bugs when interacting. For example, clicking the buttons at the top doesn't switch to the expected section; it just flashes the screen momentarily. Clicking refresh in the browser right after that fixes it.

The major pain point is keyboard handling when selecting, cutting, and pasting text in edit controls and the markup editor.


Can confirm this lib is very good. I use it to simplify my status app; it allows me to keep everything in Rust so that I don't need to know too much about React and other web tech.

There are demo apps that you can just grab and edit to your liking, and the docs explain what you need to build the WASM app and run it.


Someone made a tutorial on making a web app in Rust with egui: https://www.youtube.com/watch?v=4MKcqR9z8AU


Immediate mode is such an outdated approach for a modern UI framework. It's a deal-breaker imo.


Immediate vs retained is not about "modern" vs "outdated", both are about as old as computer graphics themselves. One is better in some cases, the other in others.

In any case, the fashion (for the last ~10 years) is to consider immediate mode the modern approach, and retained mode an awkward practice of the 90s. So you can call "immediate mode" a lot of things, but you can't call it "outdated" when most consider it the new black...


You're thinking of React and Flutter and so on. They aren't immediate mode. They're more like retained mode with a non-traditional way of keeping the UI state up to date. Very different to egui.


React and Flutter aren't immediate mode only because they're built on top of the DOM as a primitive. Otherwise, conceptually they are evolutions of immediate mode.

And they inspired plenty of GUIs outside the web to go "immediate" mode themselves.


Words don't have any meaning anymore if react (and the HTML DOM) is considered anything close to "immediate mode".

For starters, the DOM is a declarative thing - immediate mode means that you have a function called each frame, with imperative function calls written by the user of the immediate-mode GUI toolkit to render each element of the GUI every time (unlike retained mode, where a framework hides that behind some object model). It's not really possible to be more opposite to a declarative approach.
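
To make the contrast concrete, here's a sketch built on two hypothetical toolkit stubs (neither is a real library; they exist only so the two styles compile side by side):

```rust
// Stub types standing in for a hypothetical toolkit; not a real library.
struct Button;
impl Button {
    fn on_click(&self, _handler: impl Fn()) {}
}
struct RetainedUi;
impl RetainedUi {
    fn add_button(&mut self, _label: &str) -> Button {
        Button
    }
}
struct ImmediateUi {
    clicked_this_frame: bool,
}
impl ImmediateUi {
    fn button(&mut self, _label: &str) -> bool {
        self.clicked_this_frame
    }
}

// Retained mode: create widget objects once, keep references, mutate them later.
fn build_retained(ui: &mut RetainedUi) -> Button {
    let save = ui.add_button("Save");
    save.on_click(|| println!("saved"));
    save // the caller holds on to this and pokes at it when state changes
}

// Immediate mode: nothing is kept; the UI is re-declared from scratch every frame.
fn immediate_frame(ui: &mut ImmediateUi, dirty: bool) {
    let label = if dirty { "Save*" } else { "Save" };
    if ui.button(label) {
        println!("saved");
    }
}

fn main() {
    let mut retained = RetainedUi;
    let _save_button = build_retained(&mut retained);

    let mut immediate = ImmediateUi { clicked_this_frame: true };
    immediate_frame(&mut immediate, true);
}
```

In the retained case the caller keeps the `Button` around and mutates it; in the immediate case nothing outlives the frame, and clicks are returned straight from the call that draws the widget.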


Perhaps confusingly, the term "immediate mode GUI" usually describes an API style. NOT an implementation detail.

It suggests a particular implementation, but in practice most nontrivial "immediate mode" GUI libraries (including egui [1] and the famous Dear-IMGUI [2] [3] ) retain some "shadow state" between frames. The existence or scope of that state is a (sometimes-leaky) implementation detail that shouldn't distract from the fact that the API presented is still "immediate mode."

[1] https://github.com/emilk/egui#ids

[2] https://github.com/ocornut/imgui/blob/master/docs/FAQ.md#q-a...

[3] https://github.com/ocornut/imgui/wiki/About-the-IMGUI-paradi...
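
As a small illustration of that shadow state in egui specifically, here's a hedged sketch of a widget function (assumed to be called from inside an egui update loop); the collapsing header's open/closed state persists across frames even though the code re-declares it every frame:

```rust
use egui::Ui;

// Re-declared every frame, yet the header's open/closed state survives between
// frames: egui stashes it in per-widget memory, keyed by an Id derived from the label.
fn shadow_state_demo(ui: &mut Ui, name: &mut String) {
    egui::CollapsingHeader::new("Details").show(ui, |ui| {
        // push_id disambiguates widgets that would otherwise hash to the same Id,
        // e.g. identical fields created in a loop.
        ui.push_id("name-field", |ui| {
            ui.text_edit_singleline(name);
        });
    });
}
```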


The key point of the comparison is that in React, as in immediate mode, you describe what you want shown every time (for every "frame"), as opposed to having references to and controlling instantiated widget objects (an object graph).

In React's case, you tell it declaratively (your JSX component tree) with some procedural stuff thrown in (the JS parts in the JSX). In classical immediate mode, by calling paint functions. But in both cases you tell it every time - as opposed to performing actions on a pre-created widget graph. With the caveat that in React's case, behind the scenes, there is a widget graph, the DOM. But that's an implementation detail; as far as the dev writing React is concerned, they re-describe every GUI state.

So, you can say that React is an "immediate mode" abstraction over the retained DOM. In fact that's exactly what devs say about it:

https://twitter.com/ibdknox/status/413363120862535680

That's the inspiration from "immediate mode" that React brought - the GUI intention described by creating a new state, as opposed to manipulating state. And the diff algorithm is also inspired by the diffing algorithms used in immediate mode to draw less for each frame.

That React does this with DOM widgets under the hood as opposed to painting commands, and that those are higher level widgets and not lines and rectangles is not the part of the analogy people emphasize.

retained: the user instantiates widgets and changes their state (e.g. to clicked).

traditional immediate: for every 'tick', user issues commands to draw the new GUI state, usually with some smart diffing under the hood to minimize paint commands.

React immediate: for every tick, user describes the arrangement of widgets, text, etc for the next GUI state, usually with some smart diffing under the hood to minimize DOM changes.

Calling a "draw button" function, as opposed to declaring a "<Button>" in a JSX structure, is not that crucial of a difference, compared to the conceptual change between "manipulating objects" and "telling everything about how the UI should look at each tick".

Heck, a dev doing an "actual" immediate mode could trivially wrap the "draw button" function call to be driven from a declarative file (if you parse the term "button": call draw button, etc). It still wouldn't be retained mode.

And React and co also inspired interest in immediate mode GUIs, and inspired some new actual bona-fide immediate mode, with graphic calls and everything.


Your own comment makes it pretty clear that React is not immediate mode. Not the immediate mode that everyone understands anyway. Your "traditional immediate" and "react immediate" are just terms you made up.

React is clearly a different thing to immediate mode and retained mode. You don't have to cram everything into those two categories. We could do with a word for it though.

I don't think the guy you linked is a React Dev.


>Not the immediate mode that everyone understands anyway. Your "traditional immediate" and "react immediate" are just terms you made up.

The terms, yes. The description and understanding of React (as an immediate-mode wrapper over the DOM) is shared across the community, all the way to Wikipedia:

"One way to have the flexibility and composability of an immediate mode GUI without the disadvantages of keeping the widget tree only in function calls, with the lack of direct control of how the GUI is drawn in the rendering engine would be to use a virtual widget tree, just like React uses a virtual DOM".

What you're arguing is little details about "how things were always done in immediate land", as if this were set in stone.

Well, React and declarative descriptions of the UI such as SwiftUI have also shown that you can apply the immediate-UI concept to something other than direct drawing calls - namely, controlling a retained-mode UI underneath with the same logic with which you'd call paint functions (or abstract them to "drawButton" and such).

It's not that you're wrong. It's just that you're right about the trees, not the forest. It's the concept that matters, not the implementation details -- which is like complaining that "Linux can't be UNIX, it doesn't derive from ancient blessed code". Yeah, but it's still UNIX to everybody - and in fact today's de facto UNIX.

>I don't think the guy you linked is a React Dev.

I didn't say he was. In fact it says up there on the tweet that he's doing Aurora, and that in it he was inspired by React and its immediate-mode approach (he's the guy behind Lighttable/Aurora/Eve).


React does have an immediate mode style of API, however it's not directly involved in graphics drawing - the actual graphics calls are performed by the browser which is doing retained mode graphics.


Flutter on the web uses the DOM?

I seem to recall discussing here on HN with people from the Flutter team how they didn't use the DOM on purpose.


>Flutter on the web uses the DOM?

Yes. On the web it actually has two renderers, an HTML one (which uses the DOM) and a canvas-based one (which doesn't).

But I mostly wanted to write about what's the case with React - I just wrote quickly and also kept the reference to Flutter from the parent comment which mentioned them together...


Thanks, I didn't know about the HTML renderer.

Is there any demo online?


Flutter isn't built on top of the DOM. And anyway that's not why they're not immediate mode. Take a look at some egui code and you'll see huge differences, e.g. in how button clicks are handled, when code runs, etc.


>Flutter isn't built on top of the DOM.

The web version has a DOM-based HTML renderer in addition to a separate canvas one. So (on the web, where the DOM is applicable anyway) it kinda is.

In any case, I didn't intend to go on about Flutter, which I don't use; I just wanted to write about React. I mentioned it while answering because the parent comment mentioned them together, and I didn't pay much attention.


Is the font rendering awful for anyone else? Firefox on Windows fwiw


Cool. Looks like a fancier version of Dear ImGui.


Why don't apps like this leverage gpu hardware acceleration?

On my old tablet, Netflix runs a-ok, the media player has hardware acceleration and runs a-ok, but Amazon Prime stutters.

I haven't checked it in years, so maybe it is better now, but I am skeptical this is the case, because if the original creators of the Android app didn't think hardware accelerated video might be a nice idea, why would it be different now? This is a structural issue.


There is a cross-cutting problem between which codecs the specific hardware supports (and which are generally supported by all the popular devices), how much bandwidth you have on your connection, and how many different codecs the various streaming services can support and cache locally.

There are trade offs to be made.


Fair, but in this case I think the tablet should be usable. Sure, it's old and only a dual-core CPU, but checking the list of hardware-accelerated Qualcomm decoder codecs it supports, it lists: vc1, divx, divx311, divx4, avc, mpeg2, mpeg4, h263.

Surely AVC, which is H.264, should still be supported...

Maybe it gets tripped up trying to use the software Google decoder codecs instead?

That said, I am not able to test it anymore. The tablet is on KitKat and I'm not able today to find a site hosting an APK with a minSdk older than Lollipop. It's not a big deal, but it's disappointing to see APKs for services that used to work just disappear. They always go on about "security". Why is it a problem for everyone else except Netflix? Netflix loves supporting devices.


"Supports AVC" doesn't mean a great deal on its own. H264 has a number of different profiles (over 20, although only 3 are commonly used), and 20 different levels. Most of the potential combinations aren't relevant, but supporting the full range of hardware decoders while also sending something reasonably close to the best quality that the device supports can mean encoding ten different versions.

If it's a particularly low-end/old device, Amazon may have simply decided it's no longer worth encoding a version supported by your device's hardware decoder, while Netflix still does.


Can you recommend any resources for learning about video codecs?


Not sure if this is what you’re looking for, but this popped up a while back: https://blog.tempus-ex.com/hello-video-codec/


This is vaguely okay but makes the common mistake of overemphasizing that codecs use an IDCT. That's just a component of intra-prediction, it isn't necessarily important or used, and in H.264+ it isn't even a real IDCT when it is used. It's a simpler transform that isn't mathematically accurate but is bit-exact, which is more important.


This is perfect! Thank you!


doom9 has a lot of good discussion.


I remember back in the day when doom9 was blocked at the corp firewall level because it had posts on how to circumvent things. Never mind that it was the unofficial support forum for things like AVISynth, x264, etc. Even the nascent days of ffmpeg were there. It took a lot of cajoling and back and forth with legal departments, but eventually access was allowed to be granted on an individual basis.

I owe a lot of what was accomplished to folks like DG and the other troopers from the bad ol' days of VFW-based apps. It was like monkeys throwing rocks at the space ships compared to what's now. Remember when 2 cores was fast?

For the person upthread talking about learning things, I wish I was in that position today with them rather than 20-odd years ago when everyone was in the dark, and thank the heavens for sites like doom9 that shone lights in dark places!


>For the person up thread talking about learning things, I wish I was in that position today with them rather than 20 odd years ago where everyone was in the dark

Where would you start today?

And what was being circumvented? Copyright protection?


DeCSS was considered a very big no-no if you worked for a company that did work for content owners. Doom9 had lots of info on how to use DeCSS, and so it was flagged by lots of corps.

Where would I start today? That depends on your level of interest, and what you want to do. Are you wanting to learn things to stay hip and cool making videos for the socials, or are you wanting to do technical things to video for less prestigious ends? Those are two different paths that intertwine, but the recommendations differ depending on which.

That's not a rhetorical question, btw. I've been in both fields, and continue to work in both. There's lots of reading, and today watching videos, but the biggest thing will be the access to actually doing the things that weren't so freely possible 20 years ago. Try things, make mistakes, learn, do again.


I am interested in using video compression to improve delivery of VR and AR content over the web. But, this can mean a lot of different things! I know there is a lot of opportunity for this, I have a lot of ideas, but my knowledge of video compression and video streaming is limited. There are some companies working on this, but it’s not a singular problem or solution. I expect it will end up being many categories of solutions.

I usually like to learn about things from first principles so that I can then choose what to intentionally skip over, if that makes sense. I find the history is usually a good place to start, but it only goes so far with these more blackbox type topics like video compression.


There's a bunch of details that can be suggested, but the sad (actually, amazing) thing is that compression and encoding software has come such a long long way, that most of the default settings will produce a very decent result. FFMPEG + x264/x265 are stunning with minimum inputs required (minimum compared to the many many switches you can fiddle with individually).

For a broader understanding, look into the basics of how codecs like H.264/H.265 work. Learn about I-frame encoding and how P and B frames work. Learn about the effects of long-GOP (multiple seconds) encodes for streaming purposes vs shorter GOPs (half a second). Understand how an encoder makes its vector decisions. How/why static videos encode compared to fast action. How will the videos you intend to use for VR/AR fall into those categories?

Resources like articles on Wikipedia have lots of information on the technical side of things without being video compression tutorials. However, learning the tech side will help you make informed decisions on what knobs to tweak and why. There are so, so many actual blogs/tutorials/SO answers/etc. that will give you actual settings to apply. Once you have a specific question, I will be amazed if there is no answer available for it online.
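
As a hedged example of that kind of experiment, here's a small Rust sketch that shells out to ffmpeg (it assumes ffmpeg with libx264 is on PATH and that an `input.mp4` test clip exists; the output names and GOP values are arbitrary):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Encode the same clip with a long GOP (streaming-friendly) and a short GOP,
    // then compare file size and how quickly a player can seek in each.
    for (gop, out) in [(240, "long_gop.mp4"), (12, "short_gop.mp4")] {
        let gop = gop.to_string();
        let status = Command::new("ffmpeg")
            .args(["-y", "-i", "input.mp4", "-c:v", "libx264", "-crf", "23"])
            .args(["-g", gop.as_str()]) // keyframe (I-frame) interval = GOP length in frames
            .args(["-bf", "2", out])    // allow up to 2 B-frames between reference frames
            .status()?;
        println!("{out}: ffmpeg exited with {status}");
    }
    Ok(())
}
```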


It's all made of inter prediction + intra prediction + residual coding.

There's an evolution from JPEG -> MPEG2 -> MPEG4 part 2 (XviD) -> AVC -> HEVC… where it gets more complicated over time but that's still the structure.

There's also alternative ideas like wavelets (JPEG2000) and many other minor silly ideas; almost all of them are bad and can be ignored. Which is not to say the MPEG codecs are perfect.

ML people think they can do "ML video compression" which will be a black box; I think this might work but only because calling your video codec ML means you can make your codec 1TB instead of a few hundred KB.


All noted. I’ll look up the items I don’t know. Thanks.

And yeah, I didn’t mean blackbox like ML. I meant like H.264/H.265. Blackbox isn’t the right term.


There are many writeups/blogs about people's experiences using these codecs, and typically I roll my eyes at yet another one rehashing the same info. However, for someone like you, being one of the lucky 10,000 TIL types, you're their target audience.

The best way to learn is to just keep encoding. Take a source file and make an output that looks good. Compare things like its file size, macroblocking, mosquito noise levels, etc. There are PSNR and other similar tests that will compare your output back to the original. See if you can then tweak any of the settings to improve the PSNR score without increasing the bitrate. Then keep decreasing the bitrate to see what you can get away with before it becomes unacceptable. You can spend a lot of time doing frame-by-frame comparisons, but remember, 99.99999% of viewers will only ever see it in full-speed playback, so don't forget to take that into consideration. Look for obvious banding from gradients. Does a 10-bit encode vs 8-bit improve that? Is it worth the limits from some players not being able to use 10-bit? How does frame size vs bitrate affect the quality of your file?
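
For the PSNR-style comparison mentioned above, a minimal sketch (again shelling out to ffmpeg, with hypothetical file names; the `psnr` filter prints its average score to stderr):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Compare a test encode against the original source; higher PSNR = closer to the original.
    let status = Command::new("ffmpeg")
        .args(["-i", "encode.mp4"])   // first input: the distorted/test encode
        .args(["-i", "source.mp4"])   // second input: the reference/original
        .args(["-lavfi", "psnr"])     // swap in "ssim" for a more perceptual metric
        .args(["-f", "null", "-"])    // decode and measure only; write no output file
        .status()?;
    println!("ffmpeg exited with {status}");
    Ok(())
}
```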

Doing enough of these tests, you'll start to get specific questions. Those will have more easily found answers.


Now we’re talkin! Thank you!


Didn't everyone agree to align on AV1? So that should become a non-issue as its adoption progresses.


What about all the older CPUs and GPUs out there that don't have hardware support for AV1?


Or even new ones, like the AMD Radeon RX 6500 XT that had the AV1 decoder chopped off the Navi die for some inexplicable reason.


The reason is that it's harvested from a laptop chip; and in laptops, the Ryzen 5000 media engine features AV1 decode, so it's wasted die space.

Still, the 6500 XT is an awful GPU.


>the Ryzen 5000 media engine features AV1 decode

No it doesn't.[1] Ryzen 5000's iGPU is based on the older Vega architecture and has no AV1 support, so everyone like me who just bought a brand spanking new laptop with Ryzen 5000 will be screwed soon enough.

[1] https://cpufinder.com/amd-ryzen-7-5800u


Yeah, you need to wait until 6000 series. AMD really dragged their feet with APUs.


That's why I said as it's being adopted. Right now it's just the very beginning of this phase (for instance AMD just started providing hardware decode in RDNA2).

So I expect this to become a non-issue at some point.


As long as they don't take it as an excuse to reduce bitrates further. It's getting pretty bad these days, with macroblocking everywhere. This practice really took off when COVID started. I actually cancelled Netflix because of it. Amazon is not as bad for macroblocking, but I have run into a few shows that show macroblocks, such as Mr. Robot.


>As long as they don't take it as an excuse to reduce bitrates further.

I actually think it is easier (comparatively speaking) to push for improved networking so that someday bitrate becomes less of a concern. Just like what happened with audio. Even with the state-of-the-art VVC encoder, you only get about a 65% (or 2.8x) reduction in bitrate in most common cases compared to x264. A 65% reduction in 20 years isn't exactly a lot. We have easily got a 20x reduction in bandwidth cost in the past 20 years.

Netflix are testing with 800Gbps per box now. Maybe when PCI-E 5.0 is available, along with higher memory bandwidth, they could try 1.6Tbps per box within this decade.


Lots of people have plenty of bandwidth. I certainly do. It's the service providers who refuse to send it. It's pretty pathetic, because if you obtain a 4k to 1080p re-encode from a third party, you will get a better picture quality than what the services send directly as 1080. They will not send a 4k bitrate stream to a lower resolution display, so it's necessary to patronize third parties.

It's only since covid that they latched onto the excuse propagated by idiots out there that the internet couldn't handle people watching video from home, and immediately jumped on it as an excuse to cut quality to save a penny. Executive bonuses all around. Clap clap.


AV1 fixes the "square blocks in dark scenes" artifacts. It just doesn't have them. So that's good news.


>Didn't everyone agree to align on AV1?

Not that I am aware of. I have more faith in that with AV2.


Judging by the list of backers, it looks like everyone who matters anyway.


> Prime Video

> xyz [4K/UHD]

4K is not available when using a computer

> HD is not available because you're not using Windows

when using Windows

> HD is not available because you don't have HDCP

Graphics driver begs to differ.

Prime SD is usually somewhere between 240p and 320p, HD looks like a decently encoded 720p file. Never seen it but I'm guessing 4K might actually approach the quality of a 2007 Blu-Ray.


Prime 4K's bit-rate is ~15Mb/s. A UHD Blu-Ray has a bit-rate that's more like 70Mb/s, and a 1080p Blu-ray has a bit-rate of ~28Mb/s. Of course, 4K is usually encoded with H.265 while 1080p is encoded with H.264, so the raw bit-rate numbers are not as useful for comparison - but still, that's almost twice the bit-rate for a quarter of the pixels, and H.265 isn't magic.
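
To put rough numbers on that, a back-of-the-envelope bits-per-pixel comparison (assuming 24 fps for both streams; the exact frame rate barely changes the ratio):

```rust
fn bits_per_pixel(bitrate_mbps: f64, width: f64, height: f64, fps: f64) -> f64 {
    bitrate_mbps * 1e6 / (width * height * fps)
}

fn main() {
    // ~28 Mb/s 1080p Blu-ray vs ~15 Mb/s Prime 4K, both taken at 24 fps.
    let bluray_1080p = bits_per_pixel(28.0, 1920.0, 1080.0, 24.0);
    let prime_4k = bits_per_pixel(15.0, 3840.0, 2160.0, 24.0);
    println!("1080p Blu-ray: {bluray_1080p:.3} bits/pixel"); // ~0.563
    println!("Prime 4K:      {prime_4k:.3} bits/pixel");     // ~0.075
    println!("ratio: ~{:.1}x", bluray_1080p / prime_4k);     // roughly 7.5x fewer bits per pixel
}
```

Even granting H.265 a generous 2x efficiency edge over H.264, the Blu-ray is still working with several times more information per pixel.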


Prime 4K is nice, but if you have a decent sound system with Atmos, then Amazon Prime is not the place.

Apple TV has been the best platform so far for 4K and Atmos content, if you also want an option to purchase a title.

https://www.highdefdigest.com/news/show/everything-on-amazon...


If you have a fancy 4K TV and a decent sound system with Atmos which you want to take advantage of then you really should be buying physical media instead of streaming... I mean, if you went beyond having a sound bar, and ran wires for surround speakers with a proper home theater amplifier.

Streaming platforms usually only give you 5.1 over EAC3 (DD+); on UHD Blu-rays you get 7.1 over TrueHD, and then you get Atmos on top of both of those, and Atmos over TrueHD is more Atmosy or something (besides TrueHD obviously being lossless while EAC3 is lossy).


Very true, but frankly it's a hassle to deal with discs in this digital age.

Some soundbars, such as the Samsung Q950T/Q950A, send the sound to all speakers, either 9.1.4 or 11.1.4 respectively.

But you are quite right that discs offer the best quality.


Here in Mexico at least, there's almost no 4k content on Prime. Even less Atmos content.

Netflix, Disney, and Apple are great.

HBO Max has 4K content, but their tech is really bad and the quality is constantly jumping between SD, 720p, 1080p, and 4K. We have a 500Mbps fiber connection.


I saw these errors the one time I tried Prime Video and laughed out loud, and ended the trial on the spot. It was like being turned away by a bouncer from an awful-looking bar because they didn't like your shoes. Thus ends our relationship!


Their Android app on a tablet has even worse video quality than on a desktop.


Prime video has unbelievably bad performance on LG WebOS, and it’s not the platform because Hulu, Netflix, Disney, YouTube are all fine. If this is new and makes their app work acceptably, then great!

Just checked, and Prime is using 90MB on my TV while YouTube uses 214KB, so maybe I already have the wasm monstrosity.


While the app performance is just poor on my 2019 LG OLED TV, I find HDR content to be unwatchable on it, seemingly a long-running problem across multiple platforms: https://www.amazonforum.com/s/question/0D54P00007HLPkoSAH/le...

HDR content is somewhat better when played through the Prime Video PS5 app, though it still seems much darker than HDR content played on the Netflix/AppleTV/HBO LGTV apps.


It's not just HDR content. I was watching a 2008 BBC drama called Little Dorrit on a Roku in 1080p streaming, no HDR because that didn't exist when it was filmed. It was super smeary, had terrible blacks, tons of obvious banding artifacts, it was really bad.

I bought the 1080p Blu-Ray version from the UK, and even my parents standing 10 feet away from our 48" 1080p TV (without me explaining the problems with the Prime version) could tell the difference after about 10 seconds.


Could possibly be an issue with the HDMI Black Level setting on the Roku's input.


It was a Roku TV, so the Roku video signal was built-in and not passed over HDMI.


Oh man I'm kind of happy to know that other people have this HDR problem too. Why don't apps and TVs just provide a simple toggle to turn it off? Why must it be automatic and non user selectable? Just give me the normal color stream!


Prime might maintain more of an asset cache than YT does.


Maybe it shouldn't?

I feel like the problems with their app are more algorithmic than in the implementation details. I once had to go edit my watchlist on amazon.com because I had added enough movies that whatever super-linear algorithm they were using simply could not comprehend my watchlist and the input watchdog timer would murder the app. And come to think of it that was when I used Prime on a Roku, so it's a cross-platform problem (yay, WASM!).


I found most LG WebOS apps to have issues of some kind. A quick Google search tells me that they finally now have an HBO Max app, but that was a major issue for the longest time for me. I got a Google stick instead.


Haven't noticed any performance issues on mine (edit: relative to other apps in that ecosystem), though it is a newer model. The UHD for in-house productions is pretty darn atrocious though (looking at you, Goliath).

edit: HDR, not UHD... though I have only noticed it predominantly on their originals vs purchases.


I have an LG UHD TV model from 2020 running webOS and I haven't noticed any performance issues either with Prime Video, UHD, HDR or otherwise. I use Bluetooth speakers/earphones for audio. The sound is almost perfectly synced with the video in Prime while there is a noticeably annoying delay in the Netflix app on the same TV.


To be clear, my beef is with the browsing UI, not the playback. The playback on all these devices is handled by dedicated hardware. It's not like Amazon's app is decoding videos in a WASM VM.


Well, X-Ray is a feature in the Prime app that I like and use quite often and is very snappy. Since it's part of the UI overlay, I think it's handled by the WASM module (probably with local caching) and not by the dedicated video-decoding hardware in the TV.

I agree that the browsing UI can sometimes take an extra second or two to refresh which I don't find to be a dealbreaker since it's the video/audio playback that I care about the most.


And yet they can't even implement Picture-in-Picture mode in their Android app. A basic feature that has been stable for years in any app that plays videos.

Not to mention that their show recommendations are the worst ever, I would be ashamed to be part of their ML team honestly. (For reference I'm an American/North African woman staying in France and all of my recommendations are for Indian action movies, literally all of them. I've never seen a single Bollywood film ever.)

Anyway this is to say, technical innovation won't undo the damage done by shitty product managers.


Personally, to me, whenever I read these stories about how Amazon is doing this novel use of WebAssembly, or how Uber is doing ludicrous engineering effort to keep their React-based app under 300MB for the App Store, I can't help but think:

"Man, that's an awful lot of work to avoid writing a native app."


The challenge would be that Prime Video is distributed on many client devices. Here’s a list of a few TV OSes: https://en.m.wikipedia.org/wiki/List_of_smart_TV_platforms. How would you manage a different native app for each of them?

Caveat: previously worked at Prime Video though not in this area, still at Amazon


I could be incorrect, but last time I did some reverse-engineering, Netflix had a Qt application that used native C++ bindings to knock out huge sections of that list.

That Qt application was only lightly modified for each platform, IIRC, and it appeared all over the place from those cheap Linux-based Blu-ray players to Smart TV clients. Anywhere that was embedded, generally Linux-based, and unlikely to get frequent updates.

As for some of the larger players like Roku or Android, it's more obvious who to hire for those: Java developers and BrightScript developers.


It all kind of undermines your original comment though, that they're trying to avoid native. It seems more likely that their goal is to reach as many devices as they can and be able to do updates frequently. In the first paragraph of the linked article the guy says the trade-off is between "updatability and performance".


I am really sick of fast-churn and forced updates. Guys. Settle. Write a native app or three. It’ll be ok.


As someone who is writing a cross platform app, I'd love to, but I can't.

There are too many different stacks with their own languages (which means different library ecosystems), no good FFI/IPC options that work across stacks, a massive test matrix for every little piece of the code, etc. All of this when the native differences that supposedly make each platform unique are so few, and virtually all UI and application logic is identical. Our platform lords have basically not standardized a single thing.


> Our platform lords have basically not standardized a single thing.

And in addition, you have bugs in official native SDKs and then you end up writing your own text rendering engine, because some OS-level API didn't work properly.


I think it's important not to conflate churn with fast updates. In its essence, the ability to deliver updates quickly (in this case by downloading a blob of wasm+js each time the app is started) is a great thing imo, as you eliminate a huge chunk of problems around delivering updates and clients on old versions. For 99% of users this is a huge win.

You can obviously abuse this by redesigning the UI every 6 weeks but let's not throw the baby out with the bath water.


I'm of the strong opinion that there simply does not exist, as you say, 'a huge chunk of problems around delivering updates and clients on old versions.' It's certainly not a win for users, but for lazy developers. Good software engineers are much more than fast-churn developers. In fact, they are different people.


I'm 100% with you on this, but good software engineers are also much more expensive than fast-churn developers, and it seems to come down to how much spend these companies want to invest in their technology. There is just an overabundance of web developers out there, and the tech companies are taking advantage of that.


I guess we'll have to agree to disagree. I just don't see barriers in getting updates out to people as a good thing.

It seems your argument is that it encourages developers to be sloppy, but in practice I think it just ends up as a net loss for users as the average user just isn't interested in updating software, however critical the bugs in their current version. Maybe I'm different from the average user, but for me the fact that chrome has gone through 90+ versions over 10+ years without me having to think about it is great (and, I suspect, strongly correlated to the relative lack of security issues over the years despite it having a huge surface area).


If they did that, then only the most popular platforms would ever be supported. Happen to use a niche product, or an older product? Well, you're SOL.


Uber is a bad example because they need to have support for literally every payment method on the planet. And all of that needs to be preloaded ahead of time just in case the user decides they're going to fly to Mumbai tomorrow. Their app absolutely needs to be bloated or it won't work at all.

Likewise, Amazon needs to run on all sorts of different smart TV platforms, not just iOS and Android. All of those apps need to be consistent to each other, rather than to their host platforms. That's almost the textbook case for using a cross-platform framework.


> Their app absolutely needs to be bloated or it won't work at all.

It really doesn't. I remember this being brought up back when the Uber engineer's post crying about how their compiler couldn't match their scale was making the rounds, and in the end the numbers just did not add up to support the app size they have right now.

> All of those apps need to be consistent to each other, rather than to their host platforms.

Why? Platform-specific apps, for the most part, should feel at home on their platforms. Not doing this is how you get YouTube for Apple TV and similarly bad apps.


> Uber is a bad example because they need to have support for literally every payment method on the planet.

How is the breadth of their payment back-end an excuse for a bloated client?


>And all of that needs to be preloaded ahead of time just in case the user decides they're going to fly to Mumbai tomorrow.

This is interesting. So essentially Uber is a "global" app by default?


Yes, that's why it works everywhere* with local specifics (e.g. proposing tuk-tuks where available, or bike rentals where Uber has the service or a partner, etc.) directly, without new downloads or anything.

* Of course, only where Uber is on the local market; I doubt it works in North Korea.


Writing a native app for every conceivable platform someone might want to watch Prime Video on seems an awful lot of work


"But why, though?" I keep asking myself, from a "how many darn buttons and doodads does it really need to have?" perspective. I suspect the issue is that people write entirely too much dang code to solve what is, after all, a fairly simple UI problem [1]. I mean, seriously. The Alto had less than a megabyte of RAM and managed to invent the GUI--menus, mouse, all. How have we managed to make this harder in 40 years?

[1]...until Branding(tm) enters the chat.


Because it's not just buttons and doodads, it's all the error handling and loading indicators and UI edge cases that are needed to make it seamless and good. Title too long? Does it get truncated or shrunk? Does the UI reverse itself for RTL languages? Does the focus ring move in a sensible order?

This is front end engineering. It's a whole profession, and when it's done poorly, folks think your software is garbage. Forty years ago half of the software was in English and required a specific screen resolution. Of COURSE it's going to be harder.


Frontend engineering is a thing because of the accidental complexity of a massive software stack that's evolved by accretion and gone through fad after fad after fad. I'm not looking down on frontend engineers, but I don't believe right-to-left languages or focus order inevitably leads to hundreds of thousands of lines of code or 50+ MB binaries.


Where do you think all that code lives, then? The support for dark mode, the ability to look at the pixel data of an image so that you can display text with the right color on top of it to have high enough contrast, the subtle animation when you tab between elements in a scrollable grid, the annotations for accessibility, the code that makes drag and drop work like you expect... As much as you don't think it is "hundreds of thousands of lines of code", it really is.

I started a business online five years ago. I used as little JS as possible: the first version was probably 500 lines total. Today, it's approaching a quarter million, and it's all just me. Why? Because I want forms to show helpful errors when you type the wrong thing, and my pages to show appropriate visual cues at the right time in the right places, and for the page to look right when dark mode kicks in on your laptop or you resize your window, and for decimals to be formatted correctly for folks in Europe. Or for the power users who expect things to be fast, so navigating forward and back loads and caches data correctly. Or for folks with high-dpi displays (or as it were, folks who zoom in) who don't want pixelated icons.

And has it paid off? Yeah. As an example, my service hosts a surprising majority of blind podcasters, including the American Council for the Blind. I didn't get here with a spring in my step and a strong belief in semantic HTML, I got here by investing in all the fiddly edge cases so no matter what size your screen is, the browser you use, the language you speak, the currency you want to get paid with, your ability to use a mouse, your screen reader or level of vision, you still have a great experience. As much as you'd like to call it a fad or accidental complexity, it's really just your inexperience with actually building good user interfaces in the 21st century.


Hey, I share your passion for human interfaces - huge kudos to you. I built the main interface for the redesigned Amazon Photos iOS app and went through a lot of the things you mentioned - totally agree that it pays off. If you ever want to chat over coffee my email is johnanthony.dev@gmail.com


I share that passion too! Most people grossly underestimate how difficult it is to implement something as seemingly simple as a text field or menu, when there are so many hidden issues and techniques that make them easy to use because you don't notice all the support you're getting.

Well implemented user interfaces have polish that makes their inherent complexity invisible, but polish is actually millions of tiny little scratches, not just a clean simple perfectly flat surface.

Accessibility and internationalization are two crucial dimensions that most people forget about (especially non-sensory/motor-impaired Americans), which each add huge amounts of unavoidable complexity to text fields and the rest of the widget set.

Then there's text selection, character/word/paragraph level selection, drag-n-drop, pending delete, scrolling, scrollbar hiding, auto scroll, focus management, keyboard navigation and shortcuts, copy and paste, alternative input methods, type-ahead, etc, all which need to perfectly dovetail together (like auto-scrolling working correctly during selection and drag-n-drop, auto-scrolling triggering on a reasonably scaled timer regardless of mouse position instead of mouse movements only inside the text field, so scrolling is deterministically controllable and happens at a reasonable speed, and doesn't freeze when you stop moving the mouse or move it too far, etc).

There are so many half-assed custom text fields out there written by well intentioned people who just didn't realize the native text fields supported all those features, or weren't intimately familiar with all of the nuances and tweaks that have been hashed out over the decades (like anchoring and extending the selection, controlling and editing the selection with the keyboard, inserting and removing redundant spaces at the seams of the beginning and the end of the selection when you drag and drop text, etc).

Even when somebody achieves the straightforward task of implementing a text field that looks pixel-for-pixel equivalent to a native text field, they're usually making a promise that they can't keep, that it also operates exactly the same as a native text field.

I've seen many text fields in games (and browsers) that break type-ahead by dropping keyboard input when you type too fast, because instead of tracking device events in a queue, they're polling the current state of the keys each frame update, so when you get slow frames and stuttering (which is often, like during auto save or browser thrashing), they miss key transitions.

Most games poll the mouse buttons and positions this way too, so they break mouse-ahead by dropping mouse clicks if you make them too fast, and they perform actions at the current position of the mouse instead of its position when the click happened.

Even a beautifully designed, well-implemented AAA-quality game like Dyson Sphere Program running on a high-end PC has this problem. After you place a power pole, you have to hold the mouse still for a moment to let the game handle the mouse-down event and draw the screen a few times before daring to move your mouse away from where you want to place the pole; otherwise the pole goes into the wrong position, away from where you clicked the mouse, and this really throws a monkey wrench into smooth fluid interaction, predictable reliability, mouse-ahead, etc.

The Xerox Star had a wonderfully well thought out and implemented text editor, which pioneered solutions to many of these issues in 1982 (including internationalization), demonstrated in this video:

Xerox Star User Interface (1982) 2 of 2

https://www.youtube.com/watch?v=ODZBL80JPqw

See Brad Myers video "All the Widgets (Fixed v2) - 1990". This was made in 1990, sponsored by the ACM CHI 1990 conference, to tell the history of widgets up until then. Previously published as: Brad A. Myers. All the Widgets. 2 hour, 15 min videotape. Technical Video Program of the SIGCHI'90 conference, Seattle, WA. April 1-4, 1990. SIGGRAPH Video Review, Issue 57. ISBN 0-89791-930-0.

https://www.youtube.com/watch?v=9qtd8Hc90Hw

Also by Brad Myers:

Taxonomies of Visual Programming (1990) [pdf] (cmu.edu)

https://news.ycombinator.com/item?id=26057530

https://www.cs.cmu.edu/~bam/papers/VLtax2-jvlc-1990.pdf

Updated version:

http://www.cs.cmu.edu/~bam/papers/chi86vltax.pdf

Brad Myers is finishing a book (tentatively titled “Pick, Click, Flick! The Story of Interaction Techniques”) which is partially a history of Interaction Techniques. Probably more than 450 pages. The initial chapter list can be seen at www.ixtbook.com. It is based on Brad’s All The Widgets video and Brief History of HCI paper, and also on his class on Interaction Techniques, which he taught three times. As part of that class, Brad interviewed 15 inventors of different interaction techniques, and video of all but one of those interviews is available online, which might also be a useful resource.

Pick, Click, Flick! The Story of Interaction Techniques:

http://www.ixtbook.com/

https://www.cs.cmu.edu/~bam/ixtbook/#abstract

Brad Myers' Interaction Design Class:

https://www.cs.cmu.edu/~bam/uicourse/05440inter/

Here's the video and slides of the talk I gave to Brad's Interaction Techniques class about pie menus -- there's a discussion of mouse ahead, event handling, and polling around 16:30:

Video:

https://scs.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=...

Slides:

https://docs.google.com/presentation/d/1R9s4EEAwUjI_7A8GgdLY...

Pie Menus: A 30 Year Retrospective (Timeline):

https://medium.com/@donhopkins/pie-menus-936fed383ff1


Thank you so much for this treasure trove of links! This is exactly the kind of stuff I love learning about. Looking forward to going over this in my free time.


Is it safe to write your email address unobfuscated?


> don't believe right-to-left languages or focus order inevitably leads to hundreds of thousands of lines of code or 50+ MB binaries

It does. It literally does. Try writing your own operating system, or font parser, or graphics engine. The work our shitty React apps sit on top of is unimaginably complex. It would take centuries for one single person to grasp what is going on in an iOS "Hello World" app.

It's not simple, at all. Especially not for a video streaming service! I mean, my God, just being able to understand the codecs and licensing issues would take you months, let alone writing decoders that run natively on every platform. Let alone dealing with monitors and color schemes and HDMI and Chromecast streaming and window resizing and bandwidth limits and buffering and....

It's not simple.

EDIT: For reference, vim, a Hacker News favorite and widely considered one of the most popular TUIs ever built, is currently sitting at about 1.2 million lines of code. All it does is edit text files. Imagine if it had to play video.


> For reference, vim

Vim is an incredibly useful tool but I've seen frequent complaints about its codebase. Competitors with significantly higher code quality exist (ex Kakoune).

> > don't believe ... inevitably leads to

> It does. It literally does. Try writing your own operating system, or font parser, or graphics engine.

There are working examples of such and the code appears to be significantly simpler than the status quo. I'm far from an expert here but to the best of my understanding feature creep combined with maintaining backwards compatibility is to blame for a significant amount of current complexity.

Consider that if you rewrite a low level API with the benefit of hindsight, everything that uses that API has to be updated. Often multiple distinct APIs will be involved though, not just one. Look at the difficulty the Linux ecosystem has had gaining full support for Wayland, which necessitated the likes of PipeWire and a number of other new ways of doing things, and has been "nearly ready" for production for how many years now?


> There are working examples of such and the code appears to be significantly simpler than the status quo

Status quo is bloated -> someone rewrites a simple replacement -> becomes popular -> "Can you cover this reasonable use case, it's not currently supported" -> repeated previous step several hundred times -> oh fuck, the "lightweight" rewrite has become the bloated status quo -> GOTO step one


In some sense, this is WAI. Hopefully along the cycle, people learn to build better architectures that are modular, scriptable, customizable, and not giant monolithic piles. If things were more scriptable, then I as a user could script them how I want, creating a wall of my own "UI", and then swapping out the backends as I see fit. In my 25 years or so programming, I wish I had done this more: scripts for everything. Learn how to use my own scripts and only adjust them when stuff breaks/is swapped out. In retrospect, I have been buffeted by the whims of UI designers for decades, and I'm kinda mad about it.


> There are working examples of such

For whom do they work? The author and maybe a small group of Internet hobbyists? When you have to support 3 billion daily active users like iOS or Windows does, that lightweight codebase isn't so lightweight anymore.

And sure, Prime Video isn't Windows, but it still has over 200 million international subscribers that it needs to handle. It's definitely not something you can write in an afternoon.


> It would take centuries for one single person to grasp what is going on in an iOS "Hello World" app.

Not really. Several years, maybe.


I have been developing iOS apps since 2014, and was active in the jailbreaking scene before then. I don't believe any one person understands the entirety of what is going on when they launch an app. Remember, iOS is based on macOS, which is based on NeXTSTEP, last released in 1995!

iOS has been plagued by a whole host of bugs from the 90s and below. Remember effective Power? (https://apple.stackexchange.com/a/189064/364414) A Unicode memory allocation bug, the code for which was probably written decades ago and intended to run on a CRT monitor. I find it hard to believe anyone at Apple had interacted with that code since the initial iPhone release. Probably nobody at Apple in 2015 knew it existed. It's so unfathomably complicated.


I started working on iOS apps around that time as well, soon after I started programming. If you put your mind to it tracing the startup process of an iOS app is not actually as difficult as you think it is, especially if you skim things like “how does RunningBoard manage my app’s lifecycle?” or “is this launch being logged by CoreDuet?”. Also, FWIW, Core Text has several engineers working on it–text rendering is not a solved problem in the slightest–and is public API depended on directly by many apps.


Alternatively, it could be that building high quality cross-platform front-ends that look and behave consistently across those different platforms is actually complex and the tools to solve those problems reflect that complexity. In my view, this is pretty common knowledge... just consider, every example of native software is horrible outside of one or two platforms, and most native software project/products don't even support more than two platforms.


This comes off as incredibly naive about FE engineering.

Buttons and Doodads? sigh.

A significant factor as to why Apple has succeeded the past 2 decades is due to the design of their products - from the sleek aluminum bodies of their hardware to the UI/UX of their operating systems.

Unless you want your UI to look like something out of 1995, you're going to have to make it look good.

Making it look good will require a decent amount of code.

From animation libraries like Framer Motion, or data visualizations using d3, or even After Effects renders from Lottie.

That isn't even mentioning the amount of state that is required to be stored so that proper renders could occur - If your user has logged in, if your user has typed something into an input field, how many times you should retry a request if it fails, what functions to run due to a websocket response, and myriads of other things that FE engineers have to deal with.

Please reconsider your thoughts.


I feel the conversation will just cycle if we try to push people to the poles of opinion and insult them. Accidental complexity is an enormous, pervasive problem across all software stacks. The ratio of lines of code to pixels on the screen is pretty bonkers. Since you mention Apple, if you look at the ratio of lines of code per pixel, it's really hard to understand how an OS weighs in at 10GB, even for Apple which is brazen about deprecating and only catering to their own hardware and products. That speaks to a universal bloat problem that shows no signs of slowing down. I mean we could dive down into the details and keep coming up with special pleading for every little functionality, but it won't be a productive conversation, really, and the big picture will be completely lost.

> Please reconsider your thoughts.

Just leave that part out next time. Anyone can write this at the end of any comment and it only injects a bad vibe into the whole conversation.


Your whole line of comments is equivalent to "No offense, but.." -> proceeds to offend.

Especially with your "buttons and doodads" comment.

I wasn't even trying to get personal, but if you're going to hit, be prepared to take a hit.

------------

Never once has the weight of my OS been a concern for me, and I'm going to make the assumption that is the general case for the majority of users.

Never once have I heard the metric of LOC/pixels.

You say it's bloat. I say it's a necessity.

It's just keeping up with the times - beautifying and manicuring to attract eyes.

And you failed to address my comment about all the logic that FE engineers have to deal with - which is more of the meat in FE development than what happens skin deep.


>> I feel the conversation will just cycle if we try to push people to the poles of opinion and insult them.

> Your whole line of comments is equivalent to

You're doing exactly what I said I won't respond to, which is pushing me towards an extremist position I did not, in fact, state. I'll point you to the guidelines (https://news.ycombinator.com/newsguidelines.html), which state:

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

And I won't be responding further.


I, for one, would love it if UIs looked like it's 1995 again. Imagine the crispness: everything fast, responsive, and written in sensible languages. A simpler and better time.


I don't know about you, but from what I recall, Windows 95 and 98 were anything but fast and responsive UI-wise. Many windows were fixed-size, and there was significant input lag.


I still remember Win 3.1 as being incredibly fast UI-wise (once loaded), a feeling that persisted until around the Win7 era. As soon as the newer, non-Win32 stuff started happening in Windows, I lost that feeling. I have it again on Linux/Wayland, where native GTK controls are just such a joy to use.

And yeah - even if there was lag or slowness I don't recall, imagine those systems on today's hardware. I recently used a WinXP on fast hardware, and by god was that thing responsive, lean and fast.


While I agree in some cases, tech companies aren't marketing to those with that specific taste palate.


Yeah, it's unfortunate. No money in a small minority.


> all the error handling and loading indicators and UI edge cases that are needed to make it seamless and good

You mean the single instance of “owo we are very sorry” that shows up for any error, including the case where you might just have the gall to use the app on a non-perfect connection? The lack of a loading indicator? The poor and non-seamless experience because somebody wanted to ship Branding™?


But I still think Prime Video, Netflix, Apple TV, etc. are garbage. They are woefully inconsistent, laggy, etc., despite it “being its own profession”.

I think the fact that it is its own profession is part of the problem. Every company feels the need to reinvent the wheel.


I feel like you could make tons of money proving everyone wrong.


A company that makes an awful lot of money can afford to not make an awful app on major platforms.

A native app on Windows, macOS, iOS and Android (TV), and some other solution for other platforms, isn't an unreasonable ask in this context.


If you're already very successful, why would you spend a lot more money to produce a buggier product that will generate a lot more customer support requests and cost a lot more money in engineering effort to maintain forever going forward? It just makes no business sense. This reasoning is even less justifiable in the context of WASM which is quite performant.


Because Apple has proven that people will actually pay for reinventing something but better and more polished. And you call it a buggier product, but if you have a dedicated team working on a native app for platform X, that team can become the expert on platform X, work around and fix all its caveats, and beat the competition on everything forever.

I mean granted, for a lot of companies, having a substandard or suboptimal app simply does not matter to their bottom line, because the product trumps the implementation details in the end; people are willing to put up with e.g. a bloated web app because it gives them access to a good chat service (think slack, discord). People were willing to put up with Twitter's fail whale outages whenever Justin Bieber tweeted because they had something good (network effect?).


> Because Apple has proven that people will actually pay for reinventing something but better and more polished

The massive popularity of React Native on the iOS platform suggests there's more to the story. Discord, Slack, Spotify etc. might also beg to differ; all Electron apps, all the most dominant players in their respective markets. And consider all the massive gaming successes built on the Unity platform.

The deference to native purity is an engineering conceit not something that users actually care about except in rare cases.


I'm not following.

This assumes that, generally, native apps are more buggy. Why? That's contestable at best, or getting it the wrong way around at worst. Perhaps at the hands of inexperienced developers it's right. But experienced developers coding against native APIs will probably produce an app with fewer bugs.

And we are talking about a billion dollar company. It can afford a handful of really good native developers for each platform. It can attract talented developers who can write cross-platform native code.

Cross-platform toolkits introduce their own class of bugs, which might require patches upstream to resolve, or annoying local forks.

As for the economics of it all, I'll leave that to the other sub-comment which deals with that with an excellent analogy to Apple. People will pay for quality.


> This assumes that, generally, native apps are more buggy.

They are, not because "native" code is inherently buggier but because the amount of code that needs to be written is multiplied by every platform you have to support to replicate the same experience per platform; more code = more bugs, and you have to handle all the nasty edge cases that are specific to each platform: a major increase in bugs is assured.

> And we are talking about a billion dollar company.

The size of the company doesn't matter, it makes no sense to massively increase the cost, complexity and staff size needed to support an existing product that's already massively successful.

> Cross-platform toolkits introduce their own class of bugs, which might require patches upstream to resolve, or annoying local forks.

This is true of literally any external source code you import into your project. However, if you re-invent the wheel you have to pay to fix it rather than having it fixed upstream for free.


And when comes the payoff for writing all those native apps?


It takes an awful lot more money to ask engineers to do boring things like "replicate this behaviour from iOS to Android".


Once I broached this subject with a developer at a start-up that developed totally separate native apps for iOS and Android. I queried him about the use of native apps, and he made the point that you do whatever it takes to keep your users happy.

This was a startup. Not a billion dollar company. When your app is used by millions, it's a worthwhile investment.

It's also hardly boring to achieve the same end result on multiple platforms by using appropriate native code for each. Particularly when it produces satisfyingly fluid and responsive end results. Perhaps an engineer that considers it boring is in the wrong field. A UI developer should get satisfaction from developing UIs, not as it being a stepping stone to get into systems development. I can't think of anything more boring in app development than developing an app which operates in a mediocre way.

A carefully planned native app for each platform can still share parts of the same codebase, you're not necessarily reinventing the wheel each time.


“This was a startup.” And? They have the latitude to take big risks. Larger companies can’t eat that risk as easily.

“it's a worthwhile investment.” Obviously not, or you’d see it more.


No. You just build one abstraction and then use a code generator to build a client for each target. As long as you have good developers with an interest in code generation who will stick around for a few years to maintain the generators, it works out fine. Resist the temptation to create a new DSL or programming language. Pick a language with a strict compiler, a good type system and good reflection capabilities and you are good to go. Ideally you code your app/abstraction once and then everything else gets generated automatically by some build pipeline.
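To make that concrete, here's a minimal sketch of the pattern in TypeScript. Everything in it (the Screen model, the two emitters) is hypothetical and only illustrates one abstraction feeding several target-specific generators; it's not the actual tooling described above.

    // Hypothetical single source of truth describing a screen.
    interface Screen {
      title: string;
      buttons: { id: string; label: string }[];
    }

    // One emitter per target; each turns the abstraction into target-specific source.
    type Emitter = (s: Screen) => string;

    const emitWeb: Emitter = (s) =>
      `<h1>${s.title}</h1>` +
      s.buttons.map((b) => `<button id="${b.id}">${b.label}</button>`).join("");

    const emitSwiftUI: Emitter = (s) =>
      `VStack { Text("${s.title}")\n` +
      s.buttons.map((b) => `  Button("${b.label}") { /* ${b.id} */ }`).join("\n") +
      "\n}";

    // The build pipeline runs every emitter over the same model.
    const home: Screen = { title: "Home", buttons: [{ id: "play", label: "Play" }] };
    for (const emit of [emitWeb, emitSwiftUI]) {
      console.log(emit(home));
    }

In a real pipeline the emitters would write files and hand them off to each platform's build system, but the shape stays the same: one model, many generated clients.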


Isn't the abstraction essentially their Wasm and JS libraries? And they just use a VM instead of a code generator.


In their case, most likely yes. I honestly don't know how many of their targets can be covered this way. The scenario I describe will not work for 8000 device types, as you would end up with 8000 codegen libraries. Or rather, with my current implementation you would end up with that many, as each target gets its own little generator.

I recently swapped out a big chunk of my codegen code and let my code call the Flutter CLI, so now I generate a Flutter project which in turn can generate a few clients (web, desktop, Android, etc). I'm currently optimizing the way I keep state between generations: certain parts can/should be overwritten but other parts need to be kept intact. Trickier than it sounds.

I highly recommend trying to build your own generators, scaffolding & tools. Very enriching!


In the pre-streaming era, every platform had a native player and content library app that worked fine.

If streaming services desire to replace local playback, they’ll also have to replace the convenience that provided.


In this world where the only real differences between streaming services are their apps and their catalogues, you’d think everyone would be doing everything in their power to acquire every advantage on those fronts.

My own strategy would be to keep the generic electron version and then make a native app for the top 5 most popular TV models but that’s just me.


Native apps are harder to update: https://news.ycombinator.com/item?id=30110265



For web & phones, I'm inclined to agree; a native app outperforms a web or hybrid app any day of the week. I do believe Facebook and Google are companies that ran into limitations, though (binary size, number of classes/identifiers, build times), for which using web technology was a solution.

But for TVs, ehh. There's a lot more fragmentation on the TV market. And if things were slightly different, we'd have a lot more fragmentation on the mobile market as well. Actually there are plenty of mobile alternatives, but developers don't really want to support all operating systems - and these cross-platform tools often have substandard support for mobile operating systems that aren't iOS or Android.


I disagree with this perspective.

From a startup pov, going with native apps will require multiple teams (Usually just Android + iOS, but occasionally Windows as well)

React Native will only require one team, or just one engineer depending on the size of the project

Better yet, If your front end developer is fluent in React(web), then that developer is already fluent in React Native.

I completely disagree on the "awful lot of work", when you could literally get your FE developer to work on your "native" apps.

Sure, where performance must be squeezed, it's probably optimal to go native, but that bar is set a lot lower than where it actually should be.

If React Native is good enough for Discord, it's good for 3 quarters of all apps out there.


Former Dev that did Web -> Ionic -> RN -> Native iOS over the course of 6 years speaking here.

- Web devs typically don’t have strong instincts for mobile apps, they require more mentorship & onboarding time.

- Animation and interactivity is strongly limited in RN - You’re always so many levels removed from the actual APIs being used. For example, using UIKit and CoreAnimation are absolutely amazing and super lightweight on CPU/GPU in comparison. UICollectionView is a modern miracle that offers so much more control than FlatList.

- JS/Babel/node_modules is hell, no I won’t elaborate.

- Multi-threading, priority/concurrent queues etc for maintaining good all-around perf while doing intense operations is very strong with native platforms. RN just has sophisticated band-aids.

Honestly it mostly depends on the type of work the app will be doing, the dev resources available to you, and the level of quality you want to attain. I strive to create the best of the best user experience, and native is the best way to attain that.

The app I work on still uses RN for some features, but the results historically have been disappointing on average.

Happy coding!


> Web -> Ionic -> RN -> Native iOS

Although that list directly relates to the number of people you reach per line of code.

Web (all devices) -> Ionic (most devices) -> RN (Android + iOS) -> Native iOS (iOS only and probably only more recent devices).

People do cross-platform development, because it runs on so many platforms with almost no additional code. And for sure, you might not get the full experience of every platform this way, but maybe it's still good enough for users to solve their problems?!


> Web devs typically don’t have strong instincts for mobile apps, they require more mentorship & onboarding time.

I don't agree entirely. I do understand that there are differences in concerns between web and mobile, but the similarities outweigh the differences, ESPECIALLY if they're working with React + RN.

Mentorship + onboarding time aren't words I would choose for a developer moving from React to React Native - you make it sound like a dev is joining a new company.

> Animation and interactivity is strongly limited in RN - You’re always so many levels removed from the actual APIs being used. For example, using UIKit and CoreAnimation are absolutely amazing and super lightweight on CPU/GPU in comparison. UICollectionView is a modern miracle that offers so much more control than FlatList.

Again performance is not something I will ever disagree with, the only part I disagree with is where people typically put that bar of "we need native because of performance" - It's a lot higher than most people think.

Furthermore, your example of FlatList in a point about animations is confusing. A better example would probably be React Spring. Granted, native animation is levels above RN's, obviously, but that doesn't mean RN can't implement the same style of animations as native can.

https://twitter.com/VilacaRodolfo/status/1178351034051284993

> JS/Babel/node_modules is hell, no I won’t elaborate.

ok

> Multi-threading, priority/concurrent queues etc for maintaining good all-around perf while doing intense operations is very strong with native platforms. RN just has sophisticated band-aids.

not arguing performance. that is always a given.

Again, I will reiterate.

If React Native is good enough for Discord, it's good enough for 3 quarters of mobile apps out there.

Happy coding!


HTML+JS+CSS is a very refined toolset for building UIs that give you content to consume. Sometimes, especially when the content is text, video and images, it's actually easier to build a high-quality experience that works very well on all platforms, because it provides interaction modes out of the box that are the same everywhere.

Native apps shine when you do something novel, something beyond text, image and video consumption, or when you leverage platform-specific functions.

IMHO, sticking to web technologies is a better option for things like Netflix or Amazon Prime, but not good for Uber and the like.


You can render OpenGL with C++ and compile it to Wasm but also have it build and run natively (Emscripten or your own bindings or whatever). So this can be a way to do both, while keeping the reach of the web. Depends on what kind of application you're building and whether this makes sense for it, for sure.


Between electron, webassembly, and other technologies, I don't think the line between web app and native app is as clear as it used to be, especially relative lift and performance. In the end, the only difference may be "one is launched in a browser and one is launched from the start menu"


It is very clear: are ALL controls which could be standard (provided by the native OS toolkit) really standard (and do they respect system theming, customization, etc), and are ALL keyboard shortcuts and other behaviors of the OS standard toolkit supported? I don't care whether the code is native (as in compiled ahead of time / just in time / interpreted), but I do care about behaviors and their consistency, both across one app and across the whole system.

I'm forced to use Slack at $job, and it is awful, for example. Some text fields look like text fields and are used as text fields, but don't respect "Shift+End/Home/Arrows" keyboard commands. Others do. It is annoying as hell. It is Electron.

Even more «native» toolkits, like GTK and Qt, have small discrepancies. Simple example: my Windows has an English MUI (interface language) but a Russian locale (time and date format, etc). In POSIX terms it is something like LC_MESSAGES=en_GB & LC_ALL=ru_RU (according to POSIX, more specific categories have priority over LC_ALL). 95% of both Qt-based and GTK-based software with a Russian translation speaks Russian to me (and, if I'm lucky, has a setting somewhere to enforce English). They think that «(system language)» is Russian. But it is not! That is not very «native», IMHO. OK, GTK-based software is typically built via Cygwin/MinGW, so it is really not very native and is «hacked» to run on Windows, but most Qt-based software IS built for Windows, as Windows is a fully supported, tier 1 platform for Qt. But still.

If I started to write down all the small nitpicks that show cross-platform software is not native, I could find many in any cross-platform app: corner cases in keyboard shortcuts, non-copyable message boxes, strange tab behaviors, non-standard open/save/print dialogs, etc, etc, etc. If you have used a platform for 25+ years (yes, Windows changed a lot from 3.11 for Workgroups, which was my first version, but these changes were small, step-by-step ones, and A LOT of things in the UX still haven't changed!), you have a shitton of muscle memory and all these nitpicks throw you off.

Yes, I know that there is no "native" toolkit for Linux. IMHO, that is a big problem, much bigger than all the problems of X11 which Wayland wants to solve.


I agree, the controls and OS integration are what makes the most difference in the whole debate. But: Why aren't there native controls in the browser? Why do they suck?

> Even more «native» toolkits, like GTK and Qt, have small discrepancies.

And then there is truly native stuff and you still have discrepancies like bugs that make you write your own UI kits, recently witnessed here:

https://nova.app/

> Here's a little editor story for fun. During beta we found some bugs in Apple's text layout engine that we just could not fix. Our solution? Writing our own text layout manager… from scratch.

That is a great example for Panic's dedication and at the same time a sane argument for doing cross-platform development, if you can't even trust the truly native libraries.


It’s all about deployability.


I love native apps. But at the scale of Amazon (and having to compete with Netflix), the slightest bit of friction, in this case downloading and installing an app, accepting permissions, etc., takes a minute or two. That would be absolutely catastrophic to their competitiveness. You need to think more broadly, at scale.


If you read the original article, the use of WebAssembly is mainly across a broad array of device types where it will be packaged into an app. That app will require installation, accepting permissions, and all of that just like a native app.

The point is moot except for anyone watching Prime Video in the web browser.


The WebAssembly component gets pulled on app startup. That means you make a bug fix, deploy a new build to an S3 bucket and every single user is running the fix inside of ~24 hours.
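For anyone curious about the mechanics, the browser/runtime side of that flow is roughly the following. A minimal TypeScript sketch only: the URL, the import object and the exported render function are made up for illustration, not Prime Video's actual code.

    // Minimal sketch: fetch the latest module from a CDN/S3 origin and instantiate it.
    async function loadUiModule(): Promise<WebAssembly.Instance> {
      const response = fetch("https://example-cdn.invalid/ui/latest.wasm"); // hypothetical URL
      const { instance } = await WebAssembly.instantiateStreaming(response, {
        // Imports the module expects (logging, host callbacks, etc.) - hypothetical.
        env: { log: (code: number) => console.log("wasm:", code) },
      });
      return instance;
    }

    loadUiModule().then((instance) => {
      // Call into the freshly downloaded code; no app-store review cycle needed.
      const render = instance.exports.render as unknown as () => void;
      render();
    });

The point is that the installed shell never changes; only the fetched module does, which is what makes the ~24-hour fix rollout possible.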


I hope native apps die. Most companies won't put in the level of effort into their engineering that the Chrome/Firefox teams do - specifically, they're not going to take security as seriously.


Agreed! Why write a native app when you can just use Babel to transpile to Google Sheets formulas? Sure it doesn’t support threading but modern devices are so fast nowadays it doesn’t even matter.


That… escalated quickly


Now I know why the Prime Video app sucks on every iOS device I've tried it on, including Apple TV.


I've never had a problem with the functionality of Prime Video across any apple/webOS devices.

The UI, however, is easily the worst of any streaming platform. Why is a search result returned for each individual season of a TV series? Why doesn't it follow ANY of the Apple TV UI conventions?

But somehow it's more reliable than Netflix, Hulu, and HBO Max (hbo isn't that high of a bar though).


Don't forget that every single button in the UI lags.

Start an episode, pause immediately because you just realized you forgot something in the kitchen? Oops now you disabled subtitles because that UI button was still busy popping up (and subtitles are necessary to understand their whispers without being DEAFENED BY LOUD SCENES without adjusting volume by 50% every 2 minutes).

Want to hide the UI because you just finished navigating the menus to re-enable those subtitles and hit the back button? Oops now you go back to the main overview because the first click did register, it just took a minute for the TV to respond and now your second click takes you back. Now have fun scrolling down five rows three columns to find the thing you were watching 1 second ago and do this dance again.

This is not helped by LG shipping a CPU literally slower than a raspberry pi's in a €550 2019 TV, but other apps do not have this problem. It's too bad Prime has some exclusives like The Expanse, I look forward to finishing the couple series we had in the queue and cancelling Prime again.


Disney+ uses WASM and feels like it has less "suck". Prime just seems to lack some UX polish.


I really wouldn't use Amazon Prime as a shining example of WA.


why not? so they aren't using webassembly correctly? who is a better example and why?


I use it on web/AppleTV/FireTV & IMO it doesn't have a great user experience on any platform. Generally it's a bit jerky/laggy & also often gets glitchy (on FireTV at least) after very long-running videos. I'm not saying it sucks, but the apps for Disney & Netflix feel so much better to use. My comment wasn't directed at WebAssembly.


I’ve been impressed with the Disney app. They had a rocky start but now it’s excellent. Definitely better than Amazon’s in my experience.


It is painfully slow compared with its competitors, and for no good reason.

Overall it has the shittiest experience, also thanks to the monstrosity that is X-Ray.

At best, this article can only say that Prime Video would be even more shitty without WASM, maybe?


Really interesting to see the adoption. Disney+ shared the following: https://medium.com/disney-streaming/introducing-the-disney-a...


On systems such as iOS, where you don't have a JIT, performance is going to be really bad, as if you are dancing in a pair of iron shoes. Honestly, Amazon can afford to go full native, at least for the scene and animation stuff.


What was the reason for web assembly versus just running rust directly on the target hardware? Is the idea that the constrained WASM environment requires less QA than if the hardware was targeted specifically by the rust compiler?


It's an alternative that gives them the updatability of JS. When they describe their original (JS/C++) architecture they mention this as a motivating factor:

> This architecture split allows us to deliver new features and bug fixes without having to go through the very slow process of updating the C++ layer. The downloadable code is delivered through a fully automated continuous integration and delivery pipeline that can release updates as often as every few hours

They do support a lot of device types, which means a lot of compilation to produce many different binaries and probably more reliance on the app update systems of third parties.


>probably more reliance on the app update systems of third parties.

Exactly right. Big platforms with guaranteed support is one thing, but here's the situation for smart TVs right now:

It's 2022. You build an app for SuperScreen's latest range of smart TV. You update it every so often, and SuperScreen are happy to certify those updates and help you push them.

It's 2023. You port your app to SuperScreen's latest range of smart TVs. You keep updating your 2022 app, but SuperScreen are slightly less helpful now, as most of their attention is focused on the 2023 set (and upcoming 2024 range).

It's 2024. You port your app to SuperScreen's latest range of smart TVs. But when you want to update your 2022 app, SuperScreen say "hmm, we don't really have a lot of time now to certify apps for these older devices, please try to make as few updates as possible". It's a little tough but you have to do it, as the alternative is that 2022 TVs don't get any of your new features.

It's 2025. You port your app to SuperScreen's latest range of smart TVs. But now SuperScreen refuse to certify updates to your 2022 app saying "these are legacy devices and we can't justify the time or effort to certify apps on this platform any longer... unless you were to pay us a LOT of money".

You might get approval for one last update, paying SuperScreen the money, and then you effectively mothball your 2022 app. It becomes a "legacy" app for you as well, stuck in maintenance mode, not getting any updates or new features. And this is on a TV that is only four years old. So the best way forward is to make your native layer as small and light as possible, and shift all of the heavy lifting into the runtime client layer. If most of your client UI is a downloadable JS / WASM runtime, then you can keep supporting it even if the platform owners don't want to play ball any more.


Is there enough security in place that would prevent downloading a full binary and running that, versus downloading JS / WASM?


If you target a WASM VM you’re not tied into Rust itself which seems appealing from a risk standpoint


How so? Your source code is still in Rust so seems like you'd still be tied to it.


The source code is in Rust. But the compilation target is WASM instruction set.


Why does that matter with respect to being less tied to rust? Yes, it compiles to WASM but your engineers still have to write rust in order to make changes to the software. I mean I suppose they could edit the WASM directly since it's technically human readable, but if you're doing that why bother with rust at all?


In terms of platform support - if you target a VM you don’t have to worry about moving from Rust because another language might not support that runtime. Now any language that can work on a WASM VM is okay. Also, if the app isn’t monolithic it means supporting services can be in other languages than Rust too.


ELF, COFF, Mach-O, etc. objects can already be linked together regardless of the compiler producing them.


Just curious: which specific wasm VM implementation are they using?


They mentioned joining the Bytecode Alliance, so probably wasmtime.


It's actually WebAssembly Micro Runtime (still from the Bytecode Alliance, but it's a VM that also supports an interpreted mode).


It's nice to see a big company like Amazon using Wasm in something that actually sees the light of production (and in something that's used by so many people). I always see tech giants supporting or contributing to newer technologies like Wasm without really using them in important applications.


as of late last year - of the large tech companies - it seems that amazon and microsoft might be the biggest Rust (in prod) evangelists. they basically swiped all the mozilla rust devs that were laid off and made their own dedicated Rust teams. so far, this has been good for the ecosystem as they continue to contribute to the community but i know there's a lot of community members that feel uneasy about those moves. here's to hoping the worries amount to nothing :)


Language success is in large parts driven by what system adopts that language.

Objective-C’s success was driven by iOS.

C’s success was driven by Unix.

C++’s early success was driven in large part by 90’s GUI frameworks (MFC, OWL, Qt, etc.).

I think Wasm is shaping up to be huge in that you can safely run, high performance code across multiple operating systems/CPU’s. Of all the languages, I think Rust has the best WASM tooling, and the 2020’s may end up being the WASM/Rust decade.


On the one hand the proliferation of features such as WebGL, WebRTC, WASM, etc. greatly increases browser attack surfaces. On the other hand, it allows a single relatively constrained system to focus on for security hardening / sandboxing.

Regardless, being open standards, not obnoxiously object oriented, and better designed to interoperate with normal HTML content, the modern HTML5 ecosystem is certainly way more promising than Java applets / Flash / Silverlight.


> I think Rust has the best WASM tooling, and the 2020’s may end up being the WASM/Rust decade.

Still pales in comparison to .NET


Look to the future, not to the past.


I mean, Microsoft seems to be making it clear that the replacement for old-school ASP.NET Web Forms apps is Blazor, which is C# you write that runs in the browser via WASM. It makes building SPAs easier if that’s your stack.

https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blaz...


I'm rewriting an MVC 5 app in Blazor as we speak. I'm "full-stack" but my JS skills are pretty low compared to my C#, which I am extremely comfortable with. It is actually extremely fun to work with. Having the ability to inject a service directly into a page is amazing.

I'm hoping Blazor gets more popular. However, I will see how this project goes and whether I hit any major roadblocks, so my opinion may shift. So far it seems like a great option.


What I would really love is to see XAML-like layout in the browser. CSS is a mess I can never wrap my mind around, even with the flex model. Whereas the XAML model is much simpler (a measure pass to size components, an arrange pass to lay them out).


I think CSS is mostly a sane low-level layout system (which allows you to produce whatever pixels you want, like OpenGL or whatever) rather than a predefined component system. It will never have nicely designed components built in (because that is not its goal). And the complexity price comes from the flexibility it demands.


XAML's layout components are much easier than HTML/CSS but styling XAML components is way over-complicated compared to CSS.


Thankfully there are tools like Blend.


Look to what’s being used in prod, not HN hype.


> I think Rust has the best WASM

Nope, C/C++ has, emscripten remains undefeated

You don't just put a name on something and expect things to skyrocket.

C++/Go/C# has more chance to become the standard toolkit for WASM than rust

Rust's problem is that people promote "Rust"™ more than their projects; the crabs definitely overshadow them, which is unfortunate.


Your comment seems to be rooted in your dislike of Rust, rather than providing any technical arguments.

The Rust tooling certainly was the best, no question. But as the Mozilla WASM devs have moved on, the Rust WASM tooling has sadly stagnated, and the C++ tooling has caught up.

Languages like C# and Go are inherently worse for WASM because they require GC and have big runtimes they have to bring along.


>> Languages like C# and Go are inherently worse for WASM because they require GC and have big runtimes they have to bring along.

People are streaming GBs of content with their devices. At startup most of the social media apps or websites are downloading tens of MBs. Users don't care or notice.

I don't know the size of the C# runtime in WASM, but the Go runtime is compressed at about 0.5 MB. Downloading this at startup is unnoticeable unless you are deep in the wild. With TinyGo (https://tinygo.org/) it's less than 50 KB (don't know the exact number, I think around 30 KB). So really small.

The weight of a runtime is not a convincing argument against C# or Go.

On the other hand, GC and a runtime make developing memory-safe and concurrent code much easier than in C/C++/Rust. Also, the performance of C# and Go is so close to C/C++/Rust that for most use cases they are fast enough.

Bottom line: C# or Go are not *inherently* worse for WASM. You still need to develop your app. With both C# and Go you can do it with much less mental overhead compared to C/C++/Rust. You have more time and energy to focus on your problem to solve than to manually manage memory. If you really need the absolute cutting edge performance, not WASM but native would be more appropriate anyway.


Looking forward to a Rust framework that can actually compete with Blazor; nope, Yew isn't it.


I can dislike Rust too, but facts are facts. Emscripten is the leader of the WASM world, and a lot of its code is based on C++.


Emscripten is targeted at browsers, and for that Rust does have the wasm32-unknown-emscripten target.

However, when targeting Wasm outside browsers, wasm32-wasi is usually a better option.


When targeting WASI or for standalone (no wasi, no emscripten) WebAssembly, Zig is currently the best option, IMHO.

Modules are small and fast, virtually all existing code is compatible out of the box and the standard library comes with full WASI support. And enabling runtime-specific optimizations such as SIMD is as simple as adding a compilation target flag.

While I maintain quite a few Rust crates specifically designed for WebAssembly/WASI usage, my personal experience is that Zig is often better, even from a performance perspective.

TinyGo is also amazing at producing optimized modules, and a lot of existing Go code can be compiled with it without any changes.


I already have wasm outside browsers, it is called JVM and CLR bytecode.

> More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET.

https://news.microsoft.com/2001/10/22/massive-industry-and-d...


"Emscripten" can target platform outside browsers. You have STANDALONE_WASM flag for that.


> Objective-C’s success was driven by iOS.

Well, Objective-C is from 1984 and its major success was being cloned to make a programming language called Java.


Java was a direct reaction to C++, not Objective C.

James Gosling's earlier object oriented PostScript based NeWS interpreter was a lot more like Objective C and Smalltalk than his later Java language was. (But I'm not going to mention the earlier abomination that was Gosling Emacs MockLisp. Oops!)

https://medium.com/@donhopkins/bill-joys-law-2-year-1984-mil...

>Bill Joy’s Law: 2^(Year-1984) Million Instructions per Second

>The peak computer speed doubles each year and thus is given by a simple function of time. Specifically, S = 2^(Year-1984), in which S is the peak computer speed attained during each year, expressed in MIPS. -Wikipedia, Joy’s law (computing)

>Introduction: These are some highlights from a prescient talk by Bill Joy in February of 1991.

>“It’s vintage wnj. When assessing wnj-speak, remember Eric Schmidt’s comment that Bill is almost always qualitatively right, but the time scale is sometimes wrong.” -David Hough

>C++++-=: “C++++-= is the new language that is a little more than C++ and a lot less.” -Bill Joy

>In this talk from 1991, Bill Joy predicts a new hypothetical language that he calls “C++++-=”, which adds some things to C++, and takes away some other things.

>Oak: It’s no co-incidence that in 1991, James Gosling started developing a programming language called Oak, which later evolved into Java.

>“Java is C++ without the guns, knives, and clubs.” -James Gosling

>Fortunately James had the sense to name his language after the tree growing outside his office window, instead of calling it “C++++-=”. (Bill and James also have very different tastes in text editors, too!)

>[...]


You missed these ones,

"Java Was Strongly Influenced by Objective-C ...and not C++... A while back, the following posting was made by Patrick Naughton who, along with James Gosling, was responsible for much of the design of . Objective-C is an object-oriented mutant of C used NeXTSTEP and MacOS X, and also available with gcc."

https://cs.gmu.edu/~sean/stuff/java-objc.html

"In order to supply a comprehensive and flexible object programming solution, Sun turned to NeXT and the two developed OpenStep. The idea was to have OpenStep programs calling DOE objects on Sun servers, providing a backoffice-to-frontoffice solution on Sun machines. OpenStep was not released until 1993, further delaying the project.

By the time DOE, now known as NEO, was released in 1995,[1] Sun had already moved on to Java as their next big thing. Java was now the GUI of choice for client-side applications, and Sun's OpenStep plans were quietly dropped (see Lighthouse Design). NEO was re-positioned as a Java system with the introduction of the "Joe" framework,[2] but it saw little use. Components of NEO and Joe were eventually subsumed into Enterprise JavaBeans.[3]"

https://en.wikipedia.org/wiki/Distributed_Objects_Everywhere


While Obj-C had a massive influence on Java, I'd not call it a clone - even Java 0.9 is a much better language than the Obj-C of 2010 (or so). The saving grace of Obj-C is that it could include normal C just like that.



Swing also seems to be heavily inspired by AppKit.

Although leaving out messaging and leaving in both int and Integer are weird choices if their language was supposedly inspired by Smalltalk/ObjC.


>Although leaving out messaging and leaving in both int and Integer

Both are great choices... from performance point of view. Java is still modeled after plain C 1st and foremost. int should be the default and Integer(and Long) should be avoided. Up to java1.5, it took a manual operation (Integer.valueOf/intValue) to convert, so it was not abused as much. Marked integers have been in the works and they take a significant engineering effort for a nice to have feature.

When the collection framework was introduced, it should have had primitive Maps/Lists - there was regret (around Java 8 times) about not including them initially - Streams attempted to amend the damage. The primitive types map directly to the hardware. Wrapped ones are pointers to an immutable primitive and enjoy little optimization from the JIT.

As for Swing, beans and properties - I'd say Delphi has the most similarities.


Performance mostly.


Prime Video app sucks. Spend less time over-optimizing, more time improving usability.


It is the worst of the major streaming apps. It's very slow and has lots of loading/reloading on every piece of hardware I've used it on (Android phone, iPad, AppleTV, AndroidTV).

I imagine the other streaming apps use similar techniques to share code, so I wonder why theirs is so poor in this regard.

Outside of the major players though it gets a lot worse - Funimation's streaming app is horrendous, as is Crunchyroll's. At least Prime Video is doing better than that tier.


But Amazon is one of the largest companies in the world - Crunchyroll has an excuse. Amazon doesn’t.

My girlfriend’s smart TV runs WebOS and if we have to use Prime for some reason it’s easier to plug the laptop in via HDMI and do it via a browser. Amazon’s app seems to just be broken on WebOS there.

Not only that, but what the heck is up with the completely uninspired UI and user-hostile UX? The importance of seasons being lumped together is something a focus group should have noticed.

Amazon was a web-first company - you really have to wonder how they screwed up their user experience so, so badly.


I cannot believe how Amazon is one of the biggest companies on the planet and still cannot seem to figure out UI/UX for any of their products. Amazon (the website) having an awful UX might be "by design" (i.e. dark patterns), but it makes no sense for Amazon Video, the AWS dashboard, the Kindle/device side of the Amazon website, ...


Do they even employ UI/UX designers? AWS dashboard doesn't look designed to me.


To be fair, just look what happened to Skype after Microsoft let a designer loose on it: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... They might have wanted to avoid that and just let their devs dump data into screens without a real design.

I'm not a big fan of what some designers deliver, but sitting down with maybe one or two users and seeing how they try to make sense of the mess that is AWS might have made sense.


Have you seen the AWS UI?

Or the AWS API?

I actually think this is a product of Amazon's workplace culture. The old joke: the first 90% takes 90% of the time, and the last 10% takes the other 90% of the time. Amazon only invests in the first 90%, and doesn't give a shit about the rest because they have already won the monopoly, and their sociopathic stack-ranking, forced-firing culture won't reward people working on polish.

In UIs, that second 90% is polish. Polish comes from low priority ticket/feature fulfillment, and if you're working on low priority tickets, that means you're getting fired in the next reaper cycle.


I'd be happy with surround (5.1) on Windows; alas, it seems you can only get that via an Amazon device. HD with 5.1 would suit me over any 4K, and I do feel many streaming services use 4K etc. as marketing more than anything else while failing on the audio side with regard to surround, which IMHO adds way more to the viewing experience than a bump in resolution - though others' mileage may vary.


Would this architecture work on their iOS app as well even with Apple's restrictions on downloading executable code?


You can run WebAssembly in a WKWebView, which wouldn't break any Apple restriction.


The prime video website doesn't seem to go higher than 480p. YouTube trailers for their own shows look vastly better.


first time i've seen practical wasm system design and the transition written about. that being said, i would really love a deeper dive. the graphics were also insightful -- cheers, alexandru


I'd actually love to do a more in-depth presentation of the internals and how stuff works together. Who knows, maybe in a future post :)

If you have any specific question, I could try to answer it here.


Why is this in their “science” blog?


Sounds like they are doing it wrong.

> Our Wasm investigations started in August 2020, when we built some prototypes to compare the performance of Wasm VMs and JavaScript VMs in simulations involving the type of work our low-level JavaScript components were doing. In those experiments, code written in Rust and compiled to Wasm was 10 to 25 times as fast as JavaScript.

For video processing, especially high fidelity, high frequency, and high resolution video, I can see WASM crushing JavaScript performance by orders of magnitude. But, that isn’t this. They are just launching an app.

I have verified in my own personal application that I can achieve superior performance and response times in a GUI compared to nearly identical interfaces provided by the OS on the desktop.

There are some caveats though.

First, rendering in the browser is offloaded to the GPU, so performance improvements attributed to interfaces in the browser are largely a reflection of proper hardware configurations on new hardware. The better the hardware, the better a browser interface can perform compared to a desktop equivalent, and I suspect the inverse is also true.

Second, performance improvements in the browser apply only up to a threshold. In my performance testing on my best hardware, that threshold is somewhere between 30,000 and 50,000 nodes rendered from a single instance. I am not a hardware guy, but I suspect this could be due to a combination of JavaScript being single-threaded and memory allocation designed for speed in a garbage-collected logical VM, as opposed to being allocated for efficiency.

Third, the developers actually have to know what they are doing. This is the most important factor for performance, and all the hardware improvements in the world won't compensate. There are two APIs for rendering any interface in the browser: canvas and the DOM. Each has different limitations. The primary interface is the DOM, which is more memory-intensive but requires far less from the CPU/GPU, so the DOM can scale to a higher quantity of nodes without breaking a sweat, but without cool stuff like animation.

There are only a few ways to modify the DOM. Most performance variations come from reading the DOM. In most cases, but not all, the fastest access comes from the old static methods like getElementById, getElementsByTagName, and getElementsByClassName. Other access approaches are faster only when there is not a static method equivalent, such as querying elements by attribute.

The most common and preferred means of DOM access are querySelectors, which are incredibly slow. The performance difference can be as large as 250,000x in Firefox. Modern frameworks tend to make this worse by supplying additional layers of abstraction and executing querySelectors with unnecessary repetition.
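If anyone wants to check the relative cost themselves, a rough micro-benchmark along these lines will show the gap (TypeScript, run in a page that has an element with the hypothetical id "player"); the exact ratio varies enormously by browser, DOM size and engine caching, and the 250,000x figure above is the parent's, not mine:

    // Rough micro-benchmark comparing the classic static lookup with querySelector.
    // Results are indicative only; engines cache aggressively and numbers vary widely.
    function bench(label: string, lookup: () => Element | null, iterations = 100_000): void {
      const start = performance.now();
      for (let i = 0; i < iterations; i++) {
        lookup();
      }
      console.log(label, (performance.now() - start).toFixed(1), "ms");
    }

    // Assumes the page contains an element with id="player" (hypothetical).
    bench("getElementById", () => document.getElementById("player"));
    bench("querySelector ", () => document.querySelector("#player"));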



