That can still exhaust system resources on the box it's running on (file descriptors, inodes, ports, CPU/memory/bandwidth, etc.) if you hit it hard enough.
For entirely static content, it's so much easier (and cheaper; most static hosting providers have extremely generous free tiers) to just use static hosting.
And I say this as an SRE at heart who runs Kubernetes and Nomad for fun across a number of nodes at home and in various providers - my blog is on a static host. Use the appropriate solution for each task.
I used to serve low-tens-of-MB .zip files (a heavier workload than a web page and a few images, or what have you) statically from Apache2 on a boring Linux server that would qualify as potato-tier today, with traffic spikes into the hundreds of thousands of requests per minute. Other endpoints, gated by PHP setting a header to tell Apache2 to serve the file directly once the client authenticated correctly, handled tens of thousands per minute, and I think that setup could have gone a lot higher; I never really gave it a workout. Neither workload even really taxed the hardware.
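For anyone who hasn't seen that trick, the pattern looks roughly like this. A minimal sketch using something like Apache's mod_xsendfile (illustrative, not the exact production setup; check_token and the paths are made up). PHP only makes the auth decision, then hands the actual byte-shoveling back to Apache:

    <?php
    // Sketch of the "PHP gates it, Apache serves it" pattern, assuming
    // mod_xsendfile is loaded and the Apache config has something like:
    //   XSendFile on
    //   XSendFilePath /srv/downloads

    // Hypothetical auth check; stands in for whatever the real one was.
    function check_token(string $token): bool {
        return hash_equals('expected-secret', $token);
    }

    if (!check_token($_GET['token'] ?? '')) {
        http_response_code(403);
        exit('forbidden');
    }

    header('Content-Type: application/zip');
    header('Content-Disposition: attachment; filename="release.zip"');
    // mod_xsendfile intercepts this header, discards the (empty) PHP body,
    // and streams the file from disk via Apache's own I/O path, so PHP
    // never touches the file's bytes.
    header('X-Sendfile: /srv/downloads/release.zip');

Once that header is set, the sustained transfer is plain static file serving as far as Apache is concerned, which is a big part of why the box barely noticed.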
Before that, it was on a mediocre-even-at-the-time dedicated-cores VM. That caused performance problems... because its Internet "pipe" was straw-sized, it turned out. The server itself was fine.
Web server performance has regressed amazingly badly in the world of the Cloud. Even "serious" sites have decided that the performance equivalent of shitty shared-host Web hosting is a great idea, and that introducing all the problems of distributed computing at the architecture level will somehow help their moderate-traffic site work better (LOL; LMFAO). So now they need Cloudflare and such just so their "scalable" solution doesn't fall over in a light breeze.