Nice list. At one point I tried to analyze HTML using a tree-sitter grammar, generate a list of articles, index them, and periodically check for new entries.
The RSS feed could be generated automatically with an AI code generator (or a tree-sitter query generator), just by parsing the elements of the page.
Eventually I failed, but I also didn't try very hard.
It's an interesting difference in backgrounds, maybe? I tried to build this on OpenBSD, where we don't have Docker and we use 'make' instead of random shell scripts.
For what it's worth, this is from the build log:
error code 1
error path /home/holsta/3rdparty/fusion/frontend/node_modules/@sveltejs/kit
error command failed
error command sh -c node postinstall.js
error /home/holsta/3rdparty/fusion/frontend/node_modules/rollup/dist/native.js:84
error throw new Error(
error ^
error
error Error: Your current platform "openbsd" and architecture "x64" combination is not yet supported by the native Rollup build. Please use the WASM build "@rollup/wasm-node" instead.
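The error message itself points at the usual workaround: swap the native Rollup binary for the WASM build. A hedged sketch, assuming npm is the package manager (pnpm and yarn use "pnpm.overrides" and "resolutions" respectively), via an overrides entry in package.json:

```json
{
  "overrides": {
    "rollup": "npm:@rollup/wasm-node"
  }
}
```

No idea whether the rest of the toolchain then behaves on OpenBSD, but it should at least get past this postinstall failure.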
Looks good! I'm curious why you recommend deploying with Docker; since it's a single binary with no external dependencies, I find deployment simple enough as-is.
I write all my personal projects in Go, and one of the things I like most is that it compiles to a single binary with no external dependencies.
The SQLite driver uses cgo, so we build on both Ubuntu and Windows Server in CI to avoid cross-compiling. Even so, we can't confirm it's 100% OK on Windows, and if any weird bugs occur there, we don't have much experience or energy to deal with them.
The Docker image is based on Debian, which we're more familiar with.
It's still easier to manage Docker containers if they're 50 MB instead of 300 MB, and if the rest of the fleet is managed via docker-(whichever), there's something to be said for consistency. Managing everything through one interface is easier than remembering all the special cases. But to each their own.
I don't even like docker, but it still doesn't sound that terrible to me. It's an option. Use docker or use the single binary, but presumably if you like docker and have it set up for other things, you'll just use that rather than rolling your own startup scripts etc.
I do something a bit similar for my own project. It's a single-binary REST server, but I still package it with dpkg-deb and deploy it to a private apt repo, so I can update it easily on the servers with "apt-get update && apt-get install blah", which fits nicely with my existing processes. I can also just add the repo and the dependency to my cloud-init setup. If I used Docker, I'm sure I'd find his Docker image the easiest path to getting it installed and updated.
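For anyone curious what that packaging step looks like, here's a minimal sketch of staging a single Go binary as a .deb. The package name "blah", version, maintainer address, and install path are all placeholders, not anything from the actual project:

```shell
#!/bin/sh
# Sketch: stage a single binary into a Debian package layout and build it.
set -eu

PKG=blah
VER=1.0.0
STAGE="${PKG}_${VER}"

mkdir -p "$STAGE/DEBIAN" "$STAGE/usr/local/bin"
# install -m 755 "$PKG" "$STAGE/usr/local/bin/$PKG"   # copy in the compiled binary

# Minimal control file; Depends etc. go here if needed.
cat > "$STAGE/DEBIAN/control" <<EOF
Package: $PKG
Version: $VER
Architecture: amd64
Maintainer: you@example.com
Description: single-binary REST server
EOF

# Build the .deb if dpkg-deb is available on this machine.
if command -v dpkg-deb >/dev/null 2>&1; then
  dpkg-deb --build "$STAGE"
fi
```

Publishing the result to a signed apt repo (reprepro, aptly, or similar) is a separate step, left out here.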
Consistency is key. It's easier if you're using Docker to run all the things: then docker ps shows everything that's running, instead of having to check Docker and then also check this other thing over here that's different.