Hacker News | tryfinally's comments

I always wonder whether C++ syntax ever becomes readable when you sink more time into it, and if so - how much brain rewiring we would observe on a functional MRI.


It does... until you switch employers. Or sometimes even just read a coworker's code. Or even your own older code. Actually no, I don't think anyone has achieved full readability enlightenment. People like me just hallucinated it after doing the same things for too long.


Sadly, that is exactly my experience.


And yet, somehow Lisp continues to be everyone's sweetheart, even though literally creating new DSLs for every project is one of the features of the language.


Lisp doesn't have much syntax to speak of. All of the DSLs use the same basic structure and are easy to read.

C++ has A LOT of syntax: init rules, consts, references, move, copy, templates, special cases, etc. It also includes most of C, which is small but has so many basic language design mistakes that "C puzzles" is a book.


The syntax and the concepts (const, move, copy, etc) are orthogonal. You could possibly write a lisp / s-exp syntax for C++, and the only thing it would improve would be the preprocessor macros. And a DSL can still be hard to read, even with trivial syntax, if it uses unfamiliar/uncommon project-specific concepts.


Yes, sure.

What I mean is that in C++ all the numerous language features are exposed through little syntax/grammar details, whereas in Lisps the syntax and grammar are primitive, and this is why macros work so well.


It's because DSLs there reduce cognitive load for the reader rather than adding to it.


Well-designed abstractions do that in every language. And badly designed ones do the opposite, again in all languages. There's nothing special about Lisp here.


Sure, but it's you who singled out Lisp here. The whole point of a DSL is designing a purpose-built formalism that makes a particular problem easy to reason about. That's hardly a parallel to the ever-growing vocabulary of standard C++.


I continue to believe Lisp is perfect, despite only using it in a CS class a decade ago. Come to think of it, it might just be that Lisp is a perfect DSL for (among other things) CS classes…


In my opinion, C++ syntax is pretty readable. Of course there are codebases that are difficult to read (heavily abstracted, templated ones especially), but it's not really that different from most other languages. This exists almost everywhere; even C can be just as bad with heavy use of macros.

By far the worst in this respect has been Scala, where every codebase seems to use a completely different dialect of the language, completely different constructs, etc. There seems to be very little agreement on how the language should be used. Much, much less than in C++.


Scala is a meta language. It's really a language construction toolkit in a box.


It does get easy to read, but then you unlock a deeper level of misery which is trying to work out the semantics. Stuff like implicit type conversions, remembering the rule of 3 or 5 to avoid your std::moves secretly becoming a copy, unwittingly breaking code because you added a template specialization that matches more than you realized, and a million others.


This is correct - it does get easy to read, but you are constantly considering the above semantics, often needing to check a reference or Compiler Explorer to confirm.

Unless you are many of my coworkers, then you blissfully never think about those things, and have Cursor reply for you when asked about them (-:


"using namespace std;" goes a long way to make C++ more readable and I don't really care about the potential issues. But yeah, due to a lack of a nice module system, this will quickly cause problems with headers that unload everything into the global namespace, like the windows API.

I wish we had something like Javascript's "import {vector, string, unordered_map} from std;". One separate using statement per item is a bit cumbersome.


Standard library modules: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p24...

I have thoroughly forgotten which header std::ranges::iota comes from. I don't care either.


Last time I tried, modules were a patchily supported mess. I'll give them another try once they have ironclad support in CMake, GCC, Clang and Visual Studio.


This is just a low-effort comment.

> whether C++ syntax ever becomes readable when you sink more time into it,

Yes, and the easy approach is to learn as you need/go.


It's very readable, especially compared to Rust.


I love how the haters of Rust's syntax can be roughly divided into two groups:

(1) Why doesn't it look like C++?

(2) Why does it look so much like C++?


You add PolySharp to your source generator project to get back some of the modern C# features. https://github.com/Sergio0694/PolySharp


I do have an inner monologue, but I do make many decisions non-verbally. I often visualize actions and their consequences, in the context of my internal state. When I’m thirsty I consider the drinks available nearby and imagine their taste. In the morning coffee feels most tempting, unless I’ve already had a few cups - in that case drinking more would leave me feeling worse, not better. After a workout, a glass of water is the most expedient way to quench the thirst. It is similar when I write a piece of code or design a graphic. I look at the code and consider various possible transformations and additions, and prefer ones that move me closer to my goal, or at least make any sort of improvement. It’s basically a weighing of imagined possible world-states (and self-states), not a discussion.

I struggle to imagine how people can find the time to consider all of these trivial choices verbally - in my case it all happens almost instantaneously and the whole process is easy to miss. I also don’t see what the monologue adds to the process - just skip this part and make the decision!

That said, I do use an inner voice when writing, preparing what to say to someone, etc. and I feel like I struggle with this way of thinking much more.


I had this for the longest time. Very imbalanced academic performance because I could get the answer and understood a lot of things, but had huge trouble with written work. That is, converting the thought process into a linear stream of words and sentences. I suppose it's like serialization of objects in memory.

Edit: maybe this is like the difference between a diffusion model and a "next token" model. I always feel a need to jump around and iteratively refine the whole picture at once. Hard to maintain focus.


But taking a step back, this process of converting reasoning tied to experienced consequences into words that have relatively stable meaning and interpretation over generations is what is "academic".

Without that, one does not learn quickly what another human already thought and tried out in the past (2 hours or 2 years or 2 millennia ago, it does not matter), civilization never progresses to the point it has, and we reinvent all the same things repeatedly ("look ma, I strapped a rock to a stick and now I can bash a lion's head in").

So really, if you struggle with this part of the process, you'd need to rely on somebody else who can understand your "invention" as well as you do, and can do a good job of putting it into words.

Really, this is what makes the academic process, well, academic.


The top-level comment tried to draw a line between symbolic processing — verbal and non-verbal — which really counts as "thought", and other cognition/reasoning, which doesn't.

I believe many of the things you bring up still involve symbolic reasoning (e.g. how do you decide what counts as too much coffee if you don't think in a representation like "I've had N" or "too many"? how do you consider code transformations unless you think in terms of the structure you have and the one you want to get to?).

It's no surprise that one can be good with one language and suck at another, though: otherwise we'd pick up new languages much faster, and not struggle as much with different types of languages (both spoken — think tonal vs not, or Hungarian vs anything else ;) — and programming — think procedural vs functional).

So spoken/written languages are one symbolic way to express our internal cognition, but even visual reasoning can be symbolic (think informal and formal flowcharts, graphs, diagrams... e.g. things like UML or algorithm boxes use precisely defined symbols, but the symbols don't have to be that precise for symbolic reasoning to be happening).

The question is whether it's useful to make a distinction between all reasoning and that particular type of reasoning while reusing a common, related word ("thinking", "thought") for both, or not.


Thanks, this made my day! No wonder your pitch was a success.


I've been self-hosting this for years now, works flawlessly.


Same, although in the end I figured I'll give BitWarden my money, as it's more than cheap enough.


My own "justification" is that while my main personal store is self-hosted, I maintain a paid but empty account with Bitwarden. That empty account on their servers serves as the emergency access contact for family members' official vaults. So they get some money for a license out of me, without the server usage.


Ah nice, that's a good solution.


I subscribed for two reasons: 1. To support their efforts and 2. They accept Bitcoin (and of course I paid with Bitcoin although the whole payment processing was garbage).


Bitcoin really isn't that anonymous


The parent didn't say they use Bitcoin for that feature though?


Similar here... I also trust them to maintain their service slightly better than I trust myself to do so. I like supporting the project in general as well.


Same here too for personal/family vaults. Have been using the bitwarden cloud offering in professional context too.

Vaultwarden, or bitwarden-rs as it used to be called, has been working flawlessly for years on my side; updates always work just as expected, and it supports a lot of organizational features too.

But I felt like it was better to trust bitwarden’s cloud for professional stuff, just for the reliability.


I did too, and liked it until it taught me a valuable lesson about self-hosting things. I started using the project while it was still called bitwarden-rs. Apparently they were asked by Bitwarden to rename (understandable).

My setup was based on their Docker images, and thinking it was the safest option I had set up Watchtower to automatically update to the latest image nightly to get the latest security patches. But then I discovered that the bitwarden-rs image had not been updated for _months_ because of the rename.

So basically I was hosting my whole password database in this, and I had suddenly lost security updates without realizing.

Btw, I'm not blaming either Vaultwarden or Bitwarden. But if you're going to self-host something this security-critical, be sure to monitor it _manually_ so you're not on some unpatched, vulnerable version months down the line.


Please be careful with Watchtower. Its update functionality cannot (by design) separate your own ENV settings from the ones baked into the new image.

E.g. you deploy with DATABASE_URL=x

This becomes DATABASE_URL=x PYTHON=3.0.0

You did not set the Python one, the image did via ENV.

Now a new version comes out with PYTHON=3.1.0.

Watchtower doesn't know which ENVs you set and which ones came from the image, as docker inspect exposes them in the same way.

So now Watchtower deploys the new version (which only has Python 3.1.0) with DATABASE_URL=x PYTHON=3.0.0.

And stuff stops working.

I use an ansible playbook which maintains the only ENV vars that need to survive an update.


I've seen watchtower burn so many people...

Better to put everything in git and run your own Renovate bot, which will create PRs for you to review and also pull the changelogs into the PR itself so you can check for breaking changes.


Just set Watchtower to quit after a run and trigger it manually when convenient. That way, you'll instantly know if some update went wrong and can fix it.


Do you have an example online? Always interested to see different approaches.


Same, it's become invaluable for me.

+ I really enjoy this era of self-hosted tools.


Same, it’s been great honestly.


> Oh yeah and if the interface removed a method and you didn’t realise you might be dragging that useless methods for a long while. Then again it’s not like your Java-style interface is any different.

In C# I usually use explicit interface implementations. (They're inconvenient to type, but Rider has a macro for it.) When the interface changes or disappears, my code won't compile.


Didn’t know that existed. It’s a bit of a half-assed version of half of type classes, but it’s an improvement that it’s at least available.

Can you also implement interfaces on types you didn’t define?


As a game developer, I definitely distinguish high-risk and low-risk parts of the code base.

There's code that can be allowed to fail, and furthermore, it will eventually fail due to the sheer amount of this code, the development time constraints, the number of possible game states, etc. I don't care that this code rarely fails under some arcane conditions, because this simply causes some button to stop working, some NPC to stop moving, but the game will remain playable. Even if the player notices the bug, they'll just shrug and keep playing. My aim is to make sure that the game recovers and returns to a healthy state after the level/save is reloaded. (Obviously, I'd like to fix/avoid every single possible bug, but it's impossible in practice. You'll have more luck continuously tracking in your head how dangerous the code you're working on is. Also, you rarely have the luxury of being the only programmer on the team. Bugs will happen.)

The other kind of code is the core game system stuff, the low level stuff, the error handling stuff, the memory stuff, the pointer stuff. You must pay special attention while working on this code, because failures will straight up crash the process or bring the game into an irrecoverably broken state (eg. all objects stop updating, stuck in some menu, the player never respawns...). Bugs like these are also highly prioritized by management. My update loop needs to be shiny.

Such is the reality of working on complex systems (or simple object-oriented programs ;))


TBF, I can't name more than a couple dozen games that I would consider "quality". That shit is generally developed on a deadline and it shows.


Well, yes, deadlines are best practice.


You can store it and move it around, but arithmetic operations are prohibitively expensive without hardware acceleration.

(Note that bfloat16 has a different range than float16, so you can't interpret one as the other)


Oh, I should have clarified - could one start with a bfloat16 on the software side, convert to float16 (so that e.g. a 3.4E38 bfloat16 becomes a 65504 float16), then do any "heavy math" in fast hardware float16 instructions, and then convert back at the end?


Nothing necessarily wrong with that approach, but it also kinda smells. Why even store it as bfloat16 at all? You risk getting the numerical disadvantages of both representations and the advantages of neither.


Funny coincidence, my subscription ends today. I’m switching to self employment and won’t be able to afford the service for a while. I wish it were more affordable, because the freeing feeling of not having to rely on google is irreplaceable.


When I quit my job, I paused my Privacy card that was paying for Kagi—but before the renewal date, I realized it was totally worth 2 coffees/month and unpaused. I use it all the time.


Marginal coffees are very different in quality and value across individuals, which makes this a poor metric. For yourself, unless you are actually giving up those coffees (which I think is rare among the people who use this phrase), are you really sure that you value it at that rate?

Even as a business, a subscription would be weighed differently to coffee money - you can easily scale the coffees up and down based on how things are going, this is not true with subscriptions, which essentially increase your fixed costs.

Coffees come out of very different mental budget lines to search tools.


Irreplaceable, until you're self (/un) employed.


It was still irreplaceable, since it was not replaced. (google did not replace the freeing feeling)


I suppose you're right-- the feeling was irreplaceable, but I guess the service itself not so much.


I’ll probably give DDG another go, even though their result quality would always eventually prove too annoying in practice.

By the way, I have no idea what your point is.


I shipped mine to a friend who forwarded the package to me. They figured out what I'm doing, and I got an e-mail warning me that they'll ship the laptop, but I won't get warranty outside of the country they're shipping to, so not sure I'd recommend doing this.

Worth it, though. It's one hell of a laptop.

