
Yeah, we live on the back side of a circle with no through traffic, and today I watched a teenager race around the circle at more than 20 mph for several laps. My driveway is sloped, so balls roll into the road, and my small children go after them all the time. It's been a point of focus for me to convince them to check the road, stop, look both ways, ALWAYS, and make sure there are no cars.

Even if they don't clock a car, it's my hope the driver sees them on the side while they pause to check. Them getting hit by a car in my own neighborhood is my biggest fear, and also probably the most likely disaster that could befall them.

I also love transporting them in our van, so it's just a very complicated issue. I wish our populace were more into walkable solutions and more attentive while driving.


30 minutes seems long. Is there a lot of data? I've been working on bootstrapping sqlite dbs off of lots of json data, and by holding a list of values and inserting 10k rows at a time, I've found a good perf sweet spot where I can insert plenty of rows (millions) in minutes. I had to use some tricks with bloom filters and LRU caching, but I can build a 6 GB db in like 20ish minutes now.
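
Roughly this pattern for the batching part (the table, columns, and JSON-lines input file here are made-up stand-ins, not my actual data):

    # Sketch of the 10k-row batched insert described above; the "items"
    # table, "data.jsonl", and the two fields are all hypothetical.
    import json
    import sqlite3

    BATCH = 10_000

    conn = sqlite3.connect("out.db")
    conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")

    batch = []
    with open("data.jsonl") as f:
        for line in f:  # assumes one JSON object per line
            obj = json.loads(line)
            batch.append((obj["id"], obj["name"]))
            if len(batch) >= BATCH:
                conn.executemany("INSERT INTO items VALUES (?, ?)", batch)
                conn.commit()
                batch.clear()
    if batch:  # flush the final partial batch
        conn.executemany("INSERT INTO items VALUES (?, ?)", batch)
        conn.commit()
    conn.close()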


It's roughly 10 GB across several CSV files.

I create a new in-mem db, run the schema, and then import every table in one single transaction (in my testing it didn't matter whether it's a single batch or multiple single inserts, as long as they're part of a single transaction).

I do a single string replacement on every CSV line to handle an edge case. This results in roughly 15 million inserts per minute (give or take, depending on table length and complexity). 450k inserts per second is a magic barrier I can't break.
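
The import step, sketched out (the schema file, table layout, and the particular replacement are stand-ins for my real ones):

    # Sketch of the single-transaction CSV import described above; the
    # schema file, "some_table", and the "\N" edge case are hypothetical.
    import csv
    import sqlite3

    conn = sqlite3.connect(":memory:")
    with open("schema.sql") as f:
        conn.executescript(f.read())

    with conn:  # one transaction for the whole import
        with open("table.csv", newline="") as f:
            fixed = (line.replace("\\N", "") for line in f)  # the per-line replacement
            conn.executemany(
                "INSERT INTO some_table VALUES (?, ?, ?)",
                csv.reader(fixed),
            )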

I then run several queries to remove unwanted data, trim orphans, add indexes, and finally run optimize and vacuum.

Here's a quite recent log (on a stock Ryzen 5900X):

   08:43 import
   13:30 delete non-essentials
   18:52 delete orphans
   19:23 create indexes
   19:24 optimize
   20:26 vacuum


Millions of rows in minutes sounds off, unless your tables have a large number of columns. A good rule of thumb is that SQLite's insert throughput should be at least 1% of your disk's sustained max write bandwidth; preferably 5% or more. On my last bulk table insert I was seeing 20%+ sustained, which came to ~900k inserts/second for an 8-column INT table (small integers).
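
For the arithmetic (the ~64 bytes per row is my rough guess, including row overhead, not a measurement):

    900,000 rows/s * ~64 B/row ≈ 58 MB/s
    58 MB/s / 0.20 ≈ 290 MB/s sustained write, i.e. ordinary SATA-SSD territory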


Saying that 30 minutes seems long is like saying that 5 miles seems far.


In contrast to this point: as long as I use Xcode and do the same thing I've always done, letting it manage provisioning and everything else, I don't have a problem. However, I want to use CI/CD. Have you seen what kind of access you have to give fastlane? It's pretty wild. And even after giving it the keys to the kingdom, it still didn't work. Integrating Apple code signing with CI/CD is really hard, full of very strange error messages and incantations to make it "work".


I don't know about fastlane, since my CI/CD is just a shell script, and signing and notarising is as hard as (checking the script) running `codesign ...` followed by `notarytool submit ... --wait`.
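
A minimal sketch of those two steps (my real script is plain shell; this is just an illustration, and the app name, signing identity, and keychain profile are all placeholders):

    # Hypothetical CI step: sign with a Developer ID certificate, then
    # zip and notarize. "MyApp.app", the identity string, and the
    # "notary" keychain profile are placeholders, not real values.
    import subprocess

    subprocess.run(
        ["codesign", "--force", "--options", "runtime", "--timestamp",
         "--sign", "Developer ID Application: Example Corp (TEAMID1234)",
         "MyApp.app"],
        check=True,
    )
    subprocess.run(  # zip the bundle the way notarytool expects
        ["ditto", "-c", "-k", "--keepParent", "MyApp.app", "MyApp.zip"],
        check=True,
    )
    subprocess.run(
        ["xcrun", "notarytool", "submit", "MyApp.zip",
         "--keychain-profile", "notary", "--wait"],
        check=True,
    )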

Yes, you need to put keys on the build server for the "Developer ID Application" signature (which is what you need to distribute apps outside of the App Store) to work.

You do not need to give any special access to anything else beyond that.

Anyway, it is indeed more difficult than cross-building for Darwin from Linux and calling it a day.


My experience with this at one company is that the DB became a wild west of cowboyed sprocs: they were in source control, but a lot of the time the sproc in the db didn't match the stored code. It became a way to skirt code reviews and push changes fast. Now, the environment was toxic to begin with, and maybe that wouldn't happen on a project with better technical leadership, but there is a lot of wiggle room for hanky-panky at the db level.


Upgrading to standalone components and the new signals API right now; not sure I'd say this avoids the frontend treadmill.


Standalone for us was piecemeal: just do it one component at a time, a line of code here or there, when you're already in the component making other changes.

Likewise with signals: it was trivial to just change @Input() to input() (OK, a slight simplification, but not by much; I think there are automated scripts to do it anyway if you want to do it in one fell swoop?) when already in a component making changes.

But you didn't have to, which is nice. You could take your time doing it bit by bit if you wanted, no rush, etc. I don't think the old ways are even fully gone yet anyway?


You may not have to today, but you will one day. They will remove zone.js, and there will be a whole host of deprecated libraries and outdated blog posts about how to do things the "Angular" way. If the Vite dev server didn't feel so much snappier I would lament it; overall, though, I think it's a nice change.

And sure, just change components while you're in there, but this is for the blog libraries I work on in my spare time. A lot of the standalone stuff just feels like change for change's sake, and the scripts did not run against a library project. I tried to dig into the @angular/cli repo to figure out what was going on, but after reading a few classes I noped out and just converted by hand. It only takes a couple of hours or a day to test, but that's zero-productivity time.

Change detection is different now, leveraging ngOnChanges is definitely broken now, zone.js removal is experimental, and all of it so Angular becomes more like React, as far as I can tell. My new projects are Django with templates and postbacks. It's a breath of fresh air.


Offhand guess: reduce apparent battery life in order to nudge people into upgrading, thinking their batteries are going.


That's exactly what the comment implied, as Apple has done that at least twice in the open.


Which instances would those be?


This comment thread is a microcosm of the problems with Python packaging :D I appreciate the work the ecosystem does on it, and everyone is doing their best, but it's still a hard problem that doesn't feel solved.


Check out uv if you haven’t.


Best I've been able to do is around $22 a month on DO; would love to hear alternatives that are cheaper.


DO is quite expensive. Vultr is solid; Hetzner is too, and even cheaper.


Hetzner my lord :)


Pi 5 + Cloudflare


I run a homelab that isn't too far from this, but I wouldn't recommend it without a few caveats/warnings:

- Don't host anything media-heavy (e.g. video streaming)

- Make sure you have reasonable upload speeds (probably 10+ Mbps min)

- Even behind Cloudflare, make sure you're comfortable with the security profile of what you're putting on the public internet

The min upload speed is mostly about making sure random internet users (or bots) don't saturate your home internet link.


Oh yeah definitely don't try this unless you have fiber and your ISP isn't too twitchy.

My suggestion is mainly for static site hosting, since the Pi only needs to update the Cloudflare cache when you make changes, and it should be able to handle a small db and a few background services if you need them.


Any guides or blogs on how to do that?


Loads, but it'll depend on what you want to do exactly. I think this should be the approximate list of things:

- domain at Cloudflare set up to cache requests (this will take the brunt of the site traffic)

- static IP at home (call your ISP)

- port forwarding on your router to the Pi for 80 and any other ports you need, maybe a VLAN if you feel like isolating it

- a note on the Pi that says "don't unplug, critical infra"

- the same setup on the Pi as you'd do on a cloud server: SSH with key pairs, fail2ban, etc.


Where would you host this for $60 a year?


I would use hosting with SSH access. I am based in Poland, so we have MyDevil.net. But you can also just rent a VPS for $5, though then you have to take care of setting everything up.

The first thing I thought of while reading was Firebase; it would be interesting to see how much it would cost there.


Hetzner CPX11 in Ashburn: 150 ms latencies to Europe are totally fine for this use case. With 15k groups and 162k expenses (guesstimating 30k users, email logs per expense, etc.), you're not even pushing 2 gigabytes of disk space (conservatively), nor are you doing anything computationally expensive or stressful for the DB under normal load. With decent app and db design, like proper indexing, 2 vCPUs and 2 GB of RAM is more than enough.
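
Back-of-envelope on the disk figure (the per-expense size is my own deliberately generous assumption, including the email logs):

    162,000 expenses * ~10 KB each ≈ 1.6 GB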


I've just recently gotten into Ansible and find myself building the same thing. I wrote a script to interact with virsh and build VMs locally, so I can spin up my infra at home to test, and deploy to the cloud if and when I want to spend actual money.

I'm still very much an Ansible noob, but if you have a repo with playbooks I'd love to poke around and learn some things! If not, no worries; I appreciate your time reading this comment!

