Same here for personal/family vaults. I have been using the bitwarden cloud offering in a professional context too.
vaultwarden, or bitwarden-rs as it used to be called, has been working flawlessly for years on my side; updates always work just as expected, and it supports a lot of organizational features too.
But I felt like it was better to trust bitwarden’s cloud for professional stuff, just for the reliability.
I feel mostly the same as the author. The Go ecosystem is so simple, logical and smooth that it is hard to reach for anything else. I do use other languages for one-off programs of course, be it bash, perl or javascript, depending on the task.
On bigger projects, the first pain point to appear for me is dependency management. It feels so antiquated in most other ecosystems, with loose compatibility contracts that add mental overhead. Go lets you focus on the problem you are trying to solve, and you get so used to that luxury that using anything else quickly becomes painful.
If you want to limit the number of Ps, you use a cpuset, which sched_getaffinity will take into account. The cpu cgroup controller only lets you limit CPU usage, not lower the number of CPU cores the code can run on. This is “how many” versus “how much”, and GOMAXPROCS only relates to the “how many” part.
I may have misunderstood the rationale here, but I think the discussion about cgroup support is not about limiting the number of Ps.
What people want is that, if cgroup limits prevent a container from using more than M/N of the CPU time (where N is the number of cores), then GOMAXPROCS defaults to M. Ditto for other managed language runtimes and their equivalent parameters.
However, as far as I can tell, there's no clear way to figure out what M is, in the general case.
Again, I might be wrong as I have not used this directly in a couple of years, but saying “the limit is a 50% share of 10 cores” is not equivalent to “the limit is 5 cores”. This is still “how much” versus “how many”, and one cannot be translated into the other without sacrificing flexibility.
GOMAXPROCS sets the number of live system threads used to run goroutines. The distinction between 50% of time on 10 cores and 100% of time on 5 cores doesn't really matter here: the recommendation is to set GOMAXPROCS=5 in both cases.
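For concreteness, here is a minimal sketch of that recommendation under cgroup v2, in the spirit of what go.uber.org/automaxprocs does: read the container's CPU quota from cpu.max and derive a GOMAXPROCS value from the "how much" budget (the path and the rounding policy are assumptions on my part):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
)

// maxProcsFromCgroup reads the cgroup v2 cpu.max file ("<quota> <period>"
// or "max <period>") and turns the CPU-time budget into a core count,
// rounding down but never going below 1.
func maxProcsFromCgroup() (int, bool) {
	data, err := os.ReadFile("/sys/fs/cgroup/cpu.max")
	if err != nil {
		return 0, false
	}
	fields := strings.Fields(string(data))
	if len(fields) != 2 || fields[0] == "max" {
		return 0, false // no quota configured
	}
	quota, err1 := strconv.ParseFloat(fields[0], 64)
	period, err2 := strconv.ParseFloat(fields[1], 64)
	if err1 != nil || err2 != nil || period <= 0 {
		return 0, false
	}
	n := int(quota / period)
	if n < 1 {
		n = 1
	}
	return n, true
}

func main() {
	// e.g. a "500000 100000" quota (5 cores' worth of time) gives GOMAXPROCS=5.
	if n, ok := maxProcsFromCgroup(); ok {
		runtime.GOMAXPROCS(n)
	}
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```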
I think your comment was once completely correct, but there is now also a “cpuset” cgroup in addition to the classic cpu setting. The cpuset control gives something equivalent to sched_setaffinity but stronger since the client processes can’t unset parts of the mask or override it IIRC.
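A quick way to see that difference from the Go side is a small probe like the one below (Linux only, using golang.org/x/sys/unix): the affinity mask reflects cpusets and taskset, so runtime.NumCPU shrinks with it, while a pure cpu-time quota leaves the mask alone.

```go
package main

import (
	"fmt"
	"runtime"

	"golang.org/x/sys/unix"
)

func main() {
	// The affinity mask reflects cpusets (and taskset), not cpu-time quotas.
	var set unix.CPUSet
	if err := unix.SchedGetaffinity(0, &set); err == nil {
		fmt.Println("cores in affinity mask:", set.Count())
	}
	// NumCPU is computed from that mask at process startup, so a cpuset
	// restriction shows up here; a cpu.max quota alone does not.
	fmt.Println("runtime.NumCPU:", runtime.NumCPU())
	fmt.Println("GOMAXPROCS    :", runtime.GOMAXPROCS(0))
}
```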
No, it’s an interesting comment. This is not really about load, but about control flow: if a goroutine is just spinning wildly without going through any function prologue, it won’t even be aware of the synchronous preemption request. Asynchronous preemption (signal-based) is mainly (I say “mainly” because I am not sure I can say “only”) for this kind of situation.
I don’t have the link handy, but Twitch had this kind of issue with base64 decoding in some of their servers. The GC would try to stop the world, but there would always be one or a few goroutines decoding base64 in a tight loop whenever STW was attempted, delaying it again and again.
Asynchronous preemption is a solution to this kind of issue. Load is not the issue here, as long as you go through the runtime often enough.
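A toy illustration of the shape of the problem (not the Twitch case itself): a goroutine spinning in a loop with no function calls has no cooperative preemption point, so before Go 1.14's signal-based preemption a stop-the-world request could be held up by it.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Tight loop with no function calls: no prologue, so no cooperative
	// preemption check ever runs in here.
	go func() {
		for {
		}
	}()

	time.Sleep(10 * time.Millisecond)

	// runtime.GC must stop the world. With asynchronous (signal-based)
	// preemption the spinning goroutine is interrupted promptly; on
	// pre-1.14 runtimes this call could stall behind it instead.
	start := time.Now()
	runtime.GC()
	fmt.Println("world stopped and GC finished after", time.Since(start))
}
```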
This sounds like a great idea on paper, and I think it can bring value. But once you enable this in an organization, if it works well, it could be hard to remove without causing resentment.
Thank you, you raise a very good point. I’m sure we’ll see more channels like TeamSays in the near future as Gen-Z enters the workforce and moves into senior positions. From my perspective, the future is transparency-oriented, and leadership that embraces it before others can attract the talent it needs.
I wonder if the overhead of using context values is noticeable in the context of logging. I guess you want to reconstruct the logger only when needed, kind of lazily.
It might be noticeable at extreme levels, which I've never come anywhere close to.
I tend to only use `log := clog.FromContext(ctx)` once at the top of a method, and not `clog.InfoContext(ctx, "...")`, but that's mostly for style reasons and not performance.
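For illustration, a minimal sketch of that pattern with log/slog; WithLogger and FromContext here are hypothetical stand-ins for what clog provides, so the only context lookup happens once at the top of the method:

```go
package main

import (
	"context"
	"log/slog"
	"os"
)

type loggerKey struct{}

// WithLogger and FromContext are hypothetical helpers in the spirit of
// clog: the logger travels in the context, carrying request-scoped fields.
func WithLogger(ctx context.Context, l *slog.Logger) context.Context {
	return context.WithValue(ctx, loggerKey{}, l)
}

func FromContext(ctx context.Context) *slog.Logger {
	if l, ok := ctx.Value(loggerKey{}).(*slog.Logger); ok {
		return l
	}
	return slog.Default()
}

func handle(ctx context.Context) {
	// One context lookup at the top of the method...
	log := FromContext(ctx)
	// ...then plain calls, rather than resolving the logger on every line.
	log.Info("handling request")
	log.Info("done")
}

func main() {
	base := slog.New(slog.NewTextHandler(os.Stderr, nil)).With("request_id", "abc123")
	handle(WithLogger(context.Background(), base))
}
```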
Indeed, starvation mode is just a way to ensure fairness, at the cost of surrendering control and being woken up later. Mutexes can be costly when there is a lot of contention; it is up to us to use them correctly.
Who's "us"? In Go, the threads are a shared resource across all the code. Multiple libraries can be contending for a resource.
I've hit something like this in Wine's storage allocator. Wine has its own "malloc", not the Microsoft one. It has three nested locks, and the innermost one is a raw spinlock. When a buffer is resized ("realloc"), and a new buffer has to be allocated, the copying is done inside a spinlock. If you have enough threads doing realloc calls (think "push") the whole thing goes into futex congestion collapse and performance drops by two orders of magnitude.
A spinlock in user space is an optimistic assumption. It's an assumption that the wait will be very short. It's important to have something tracking whether that assumption is working. At some load level, it's time to do an actual lock and give up control to another thread. "Starvation mode" may be intended to do that. But it seems more oriented towards preventing infinite overtaking.
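As a rough sketch of that "spin briefly, then actually block" idea, here is a toy lock for illustration only; it is not how sync.Mutex or Wine's allocator is implemented, just the shape of the optimistic-assumption-with-a-fallback approach:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// spinThenParkLock tries the fast path a bounded number of times, then
// genuinely blocks instead of burning CPU when the optimism fails.
type spinThenParkLock struct {
	token chan struct{} // holds one token when the lock is free
}

func newSpinThenParkLock() *spinThenParkLock {
	l := &spinThenParkLock{token: make(chan struct{}, 1)}
	l.token <- struct{}{}
	return l
}

func (l *spinThenParkLock) Lock() {
	// Optimistic fast path: assume the holder will release very soon.
	for i := 0; i < 32; i++ {
		select {
		case <-l.token:
			return
		default:
			runtime.Gosched() // yield instead of spinning hot
		}
	}
	// The assumption did not hold: actually block until the lock is free.
	<-l.token
}

func (l *spinThenParkLock) Unlock() {
	l.token <- struct{}{}
}

func main() {
	l := newSpinThenParkLock()
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				l.Lock()
				counter++
				l.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println("counter:", counter) // 8000
}
```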
By “us”, I was referring to normal, everyday Go devs (versus people working on Go itself).
I think starvation mode is a pragmatic way of solving the issue: overtaking happens when the fast-path assumption is wrong, but absolute fairness is not a requirement, so it’s OK. The gains of the fast path are enough to justify having that starvation mode.
I would argue that if some piece of code ends up wasting lots of time on futex calls, it suggests that this code was not designed with the right use case in mind.
Interesting comment about Wine’s allocator, I did not know about this.
I really like the idea. Having a bird's eye view of money flow is central to building and maintaining a strategy. I have been looking for this kind of tool for years, and am currently using a spreadsheet for it, as nothing else worked for me. This simple diagram is so much clearer.
But you need to refine the UX, which is barely usable at the moment: it took me a few minutes to understand how to change the numbers, and I am still not sure how to add pockets. The name "pocket" itself is not very descriptive to me, but I am not sure what I would call them instead. Maybe just "entity", with a few variations of them (income, accounts, etc).
I would need pockets that take a percentage of another one, such as tax withholding.
This feedback is not very structured, but I really like this idea and hope it will help.
Hey thanks for the feedback!
The default budget on the home page does not have the add pocket/flow buttons; those are on the pages where your budgets are listed. But it totally makes sense to have them there as well, so I just added them! Thanks for noticing!
The naming is really hard; they were first called "nodes", but that was hard to understand for many. I have to analyze this again, maybe you are onto something by just naming them income/expense/account/grouping etc.
The idea to take or send a percentage is very interesting, basically a generalization of how it currently works.
I think entities or nodes can be categorized as incoming, outgoing, and internal transfers.
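To make the percentage idea concrete, here is a small hypothetical sketch of such a model in Go: typed nodes, and flows that carry either a fixed amount or a percentage of the source's inflow (all names are made up, not the tool's actual data model):

```go
package main

import "fmt"

// NodeKind categorizes nodes along the lines discussed above.
type NodeKind int

const (
	Income NodeKind = iota
	Account
	Expense
)

type Node struct {
	Name string
	Kind NodeKind
}

// Flow moves money between nodes: either a fixed monthly amount, or a
// percentage of whatever the source node receives (e.g. tax withholding).
type Flow struct {
	From, To *Node
	Amount   float64 // used when Percent == 0
	Percent  float64 // fraction of the source's inflow, e.g. 0.30
}

func (f Flow) Resolve(sourceInflow float64) float64 {
	if f.Percent > 0 {
		return sourceInflow * f.Percent
	}
	return f.Amount
}

func main() {
	salary := &Node{"Salary", Income}
	taxes := &Node{"Tax withholding", Expense}
	checking := &Node{"Checking", Account}

	withholding := Flow{From: salary, To: taxes, Percent: 0.30}
	inflow := 5000.0
	tax := withholding.Resolve(inflow)
	rest := Flow{From: salary, To: checking, Amount: inflow - tax}

	fmt.Printf("tax: %.2f, to checking: %.2f\n", tax, rest.Resolve(inflow))
}
```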
Once you get this right, the natural next step is to take time into account. The current chart is the ideal money flow, but from month to month I’ll deviate from it. Taking time into account means that these deviations are saved and analyzed. It also means one can analyze or prospectively test new allocations: given average annual returns on investments, what would my savings look like in the future? How did they evolve over time?
Those are just my thoughts on it.