Building your own DDoS protection and CDN will involve a lot of devops bandwidth to ensure both low latency and high availability. You may need to negotiate good rates with your ISP/VPS/cloud provider for network bandwidth. It also means keeping up with security fixes and the state of the art in bot protection, etc. If this use case is not a core part of your business, it is better to bite the bullet and go with a third-party solution like Cloudflare / AWS CloudFront + WAF + Route 53 / Google Cloud Armor / Fastly.
OpenResty with a few Nginx modules and Lua scripts can go a long way for many of the use cases mentioned:
- CDN: https://github.com/taythebot/lightpath . This project seems to be a WIP which you can use as a starting point for your needs. You will also need to find good enough "edge" locations for your CDN.
Similarly, HAProxy does a lot of stuff with the correct config and is also extensible using Lua:
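For example, per-IP rate limiting, a common building block for bot mitigation, is only a few lines of Lua on OpenResty using the lua-resty-limit-traffic library. A sketch (the shared-dict name, the 200 req/s rate / 100 req/s burst limits, and the `backend` upstream are placeholders for illustration):

```nginx
# Shared memory zone used by resty.limit.req to track request rates
lua_shared_dict my_limit_req_store 100m;

server {
    location / {
        access_by_lua_block {
            local limit_req = require "resty.limit.req"
            -- limit to 200 req/s per client IP, with a burst of 100 req/s
            local lim, err = limit_req.new("my_limit_req_store", 200, 100)
            if not lim then
                ngx.log(ngx.ERR, "failed to instantiate limit_req: ", err)
                return ngx.exit(500)
            end
            local key = ngx.var.binary_remote_addr
            local delay, err = lim:incoming(key, true)
            if not delay then
                if err == "rejected" then
                    return ngx.exit(503)  -- over the burst limit
                end
                ngx.log(ngx.ERR, "failed to limit request: ", err)
                return ngx.exit(500)
            end
            if delay >= 0.001 then
                ngx.sleep(delay)  -- throttle requests above the base rate
            end
        }
        proxy_pass http://backend;
    }
}
```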
> Trustwave is announcing the End-of-Life (EOL) of our support for ModSecurity effective July 1, 2024. We will then hand over the maintenance of ModSecurity code back to the open-source community.
> “Why do I have to re-install my OS from CD after I run your tool?”
> Good question...
Heh, ah, interfacing with build systems, I do not miss it. That's a big part of why I hate Gradle and its ilk: for every person who does "version = readFile('version.txt')", some joker is going to do "dependencies = run('sudo apt-get install -y my-awesome')"
This entire slide deck is triggering, and I have so much sympathy for them
It depends a lot on the backend architecture. The number of DB requests per web request can also be high due to pathological cases in some ORMs, which can result in N+1 query problems or eager fetching of entire object hierarchies. Such problems in application code can get swept under the carpet by "magical" autoscaling (be it RDS or K8s). There can also be fanout to async services/job queues which in turn run even more DB queries.
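The N+1 pattern is easy to see with plain SQL. A minimal sketch in Python with an in-memory SQLite database (the author/book schema is made up for illustration): the first loop issues one query for the parent rows plus one query per row, while the join fetches the same data in a single round trip.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO book VALUES (1, 1, 'X'), (2, 1, 'Y'), (3, 2, 'Z');
""")

# N+1: one query for the authors, then one query per author for their books.
# With N authors that's N+1 round trips to the database.
authors = conn.execute("SELECT id, name FROM author").fetchall()
for author_id, name in authors:
    books = conn.execute(
        "SELECT title FROM book WHERE author_id = ?", (author_id,)
    ).fetchall()

# The same data in a single query with a join.
rows = conn.execute("""
    SELECT a.name, b.title
    FROM author a JOIN book b ON b.author_id = a.id
""").fetchall()
```

ORMs typically avoid this with eager-loading hints (e.g. a "join fetch" or "includes" option) so the framework emits the join instead of the per-row queries.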
Hey, this is not a problem for us at Nhost, since most of the interfacing with Postgres is through Hasura (a SQL-to-GraphQL engine). It solves the N+1 issue by compiling a single performant SQL statement from the GraphQL query (it's also written in Haskell; you can read more here: https://hasura.io/blog/architecture-of-a-high-performance-gr...)
Autoscaling is slow. If you're using an AWS autoscaling group, decisions are based on several different metrics that are typically averaged over a period. If the instance pool size is increased, that fact gets picked up by yet another event loop that runs periodically and actually starts instances. So there are multiple chained delays before an instance is actually launched. In practice, even if your instances have extremely fast start-up and can begin processing the queue quickly, a job in the queue could wait 4+ minutes to get picked up in a scale-to-zero situation. You've also got things like cooldown periods to ensure you're not flapping.
With k8s you have more control over knobs and switches, and you don't have an instance start-up delay, but the same type of metrics and event loops are used, particularly if you're using an external metric (e.g. SQS queue depth) in your calculations.
Some type of predictive and/or scheduled scaling can reduce delays at the expense of potentially higher cost.
If you're reading Data and Reality, please read the 2nd edition [1] (the 3rd edition has a co-author who replaced half the content with his own "notes"). The 2nd edition is out of print, but you can get a PDF here [2].
Source code is here: https://github.com/rrampage/skitter/tree/master/pong-raylib
It is fairly straightforward to get Raylib running in the browser. I used @flohofwoe's HTML shell file ( https://github.com/floooh/sokol-samples/blob/master/webpage/... ).
Compilation is something like:

  emcc -o target/pong.html -Wall -Wextra -std=c99 pong-raylib/pong.c lib/libraylibweb.a -I ./include -sUSE_GLFW=3 -sSINGLE_FILE -Oz --closure 1 -sFILESYSTEM=0 --shell-file pong-raylib/shell.html