"Tangent: why is it that downloading a large file is such a bad experience on the internet?"
This comment could only come from someone who never downloaded large files from the internet in the 1990s.
Feels like heaven to me downloading today.
Watching video from YouTube, Facebook, etc., if accessed via those websites running their JavaScript, usually uses the Range header. Some people refer to the "break up and re-assembly" as "progressive download".
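For the curious, here is a minimal sketch of that kind of Range-based break-up-and-reassembly in Python. The URL and chunk size are made up, and a real player does far more bookkeeping; this just shows the mechanism:

    import requests

    url = "https://example.com/big-video.mp4"   # hypothetical URL
    chunk = 1024 * 1024                         # ask for 1 MiB per request
    total = int(requests.head(url, allow_redirects=True).headers["Content-Length"])

    with open("big-video.mp4", "wb") as out:
        for start in range(0, total, chunk):
            end = min(start + chunk, total) - 1
            # fetch one byte range of the file
            part = requests.get(url, headers={"Range": f"bytes={start}-{end}"})
            out.write(part.content)             # re-assemble the pieces in order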
Adding tangent to tangent, I recently experienced an unexpected modern counterpart of a 1990s large download: deleting about 120K emails from a GMail folder, then purging them for real by "emptying" the GMail "trash bin".
The first phase was severely asynchronous, with a popup mentioning "the next few minutes", which turned out to be hours. Manually refreshing the page showed a cringeworthy deletion rate of about 500 messages per minute.
But at least it worked; the second phase was more special, with plenty of arbitrary stops and outright lies. After repeated purging attempts I finally got an "empty bin" achievement page on my phone, but the next day I found over 50K messages in the trash on my computer, where every attempt to empty the trash showed a very slow progress dialog that reported completion but actually deleted only about 4K messages.
I don't expect many JavaScript houses of cards as complex as GMail's message handling to be tested on large jobs; at least the old FTP and web servers were designed with high load and large files in mind.
Video streaming usually uses something like DASH/HLS and is a fair bit more complicated than Range headers. Notably this means that downloading the video means reversing the streaming format and gluing the segments together.
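To illustrate the gluing step, here is a rough Python sketch, assuming a plain HLS media playlist with MPEG-TS segments; the URL is hypothetical, and fragmented-MP4/DASH segments would need a real remuxer (e.g. ffmpeg) rather than simple concatenation:

    import requests
    from urllib.parse import urljoin

    playlist_url = "https://example.com/stream/index.m3u8"  # hypothetical URL
    playlist = requests.get(playlist_url).text

    with open("stream.ts", "wb") as out:
        for line in playlist.splitlines():
            line = line.strip()
            if line and not line.startswith("#"):        # segment entries, not #EXT tags
                seg_url = urljoin(playlist_url, line)    # resolve relative segment paths
                out.write(requests.get(seg_url).content) # glue the segments in order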
In recent times, large video files could often be downloaded in popular browsers by changing a URL parameter like "r=1234567" to "r=0". I have downloaded many large videos that way.
DASH is used sometimes, but not on the majority of videos I encounter; of course that can change over time. The point is that downloading large files today, e.g. from YouTube, Facebook, etc., has been relatively fast and easy compared with downloading large files in the 90s, when speeds were slower and interruptions more common, even though these websites might be changing how they serve the files behind the scenes and software developers gravitate toward complexity.
Commercial "streaming", e.g., ESPN, etc., might be intentionally difficult to download and might involve "reversing" and "glueing" but that is not what I'm describing.