Hacker News | blub's comments

> I'm trying really hard to buy European made products and to use European services where possible.

European companies are trying even harder to outsource to China.

Over the past months I’ve seen an increase, and it feels like almost everything is made in China, from books to Christmas trinkets to clothes and kitchen utensils; it’s a pain in the ass to find locally produced goods.

This has a lot to do with the energy crisis triggered by decoupling from Russia, which was never properly put into context and evaluated from an economic perspective.


QtBase is C++ first of all.

Massive projects like Qt also push compilers to their limits and use various compiler-specific and platform-specific techniques which might appear as bugs to Fil-C.


(Not the OP)

For many years, all the projects I’ve been in had mandatory code review, some in the form of PRs (a github fabrication), most as review requests in other tooling.

This applies to everything from platform code, configuration, tooling to production software.

Inside a component, we use review to share knowledge about how something was implemented and reach consensus on the implementation details. Depending on developer skill level, this catches style or design issues, or even bugs. For skilled developers, it’s usually comments on code-to-architecture mismatches, understandability, etc. Sometimes not entirely objective things that nevertheless contribute to developing and maintaining a team consensus and style. Discussions also happen outside and before review, but we’ve found reviews invaluable.

If a team has yearly turnover or different skill levels (typical for most teams), not reviewing every commit is sloppy. Which has an additional meaning now with AI slop :)


Doing a vehicle check-up is a pretty normal thing to do, although in my case the mandatory (EU law) periodic ones are happening often enough that I generally don’t have to schedule something out of turn.

The few times I did go to a shop and ask for a check-up they didn’t find anything. Just an anecdote.


There was an interesting video on YT where an engineer from a fastener company joined a carpenter to compare their products with traditional joints.

The traditional joints held up very well and even beat the engineered connectors in some cases. Additionally one must be careful with screws and fasteners: if they’re not used according to spec, they may be significantly weaker than expected. The presented screws had to be driven in diagonally from multiple positions to reach the specified strength; driving them straight in, as the average DIYer would, would have resulted in a weak joint.

Glue is typically used in traditional joinery, so less glue would actually have a negative effect.


> Glue is typically used in traditional joinery, so less glue would actually have a negative effect.

And a lot of traditional joinery is about keeping the carcase sufficiently together even after the hide glue completely breaks down so that it can be repaired.

Modern glues allow you to use a lot less complicated joinery.


Google have published a couple of high-level Rust blog posts with many graphs and claims, but no raw data or proofs, so they haven’t demonstrated anything.

By now their claims keep popping up in Rust discussion threads without any critical evaluation, so this whole communication is better understood as a marketing effort and not a technical analysis.


> Google have published a couple high-level Rust blog posts with many graphs and claims, but no raw data or proofs, so they haven’t demonstrated anything.

Don't expect proofs from empirical data. What we have is evidence. Google has published far better evidence, in my view, than "we have this one CVE, here are a bunch of extrapolations".

> By now their claims keep popping up in Rust discussion threads without any critical evaluation,

Irrelevant to me unless you're claiming that I haven't critically evaluated the information for some reason.


> No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.

To me it seems they reduce productivity. In fact, for Rust, which seems to match the examples you gave about locks or regions of memory, the common wisdom is that it takes longer to start a project, but one reaps the benefits later thanks to more confidence when refactoring or adding code.

However, even that weaker claim hasn’t been proven.

In my experience, the more information is encoded in the type system, the more effort is required to change code. My initial enthusiasm for the idea of Ada and Spark evaporated when I saw how much ceremony the code required.


> In my experience, the more information is encoded in the type system, the more effort is required to change code.

I would tend to disagree. All that information encoded in the type system makes explicit what is needed in any case and is otherwise only carried informally in people's heads by convention. Maybe in some poorly updated doc or code comment where nobody finds it. Making it explicit and compiler-enforced is a good thing. It might feel like a burden at first, but you're otherwise just closing your eyes and ignoring what can end up being important. Changed assumptions are immediately visible. Formal verification just pushes the boundary of that.
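A minimal Rust sketch of that idea (hypothetical names, not from any codebase discussed here): an informal assumption like "this string is never empty" can instead be established once by a type, after which the compiler enforces it everywhere.

```rust
// A hypothetical newtype that makes a "non-empty" invariant explicit.
struct NonEmpty(String);

impl NonEmpty {
    // The informal assumption "callers never pass an empty string"
    // becomes a fallible constructor every caller must handle.
    fn new(s: &str) -> Option<NonEmpty> {
        if s.is_empty() { None } else { Some(NonEmpty(s.to_string())) }
    }
}

// Functions taking NonEmpty no longer re-check or document the invariant;
// the type system guarantees it was established at construction.
fn first_char(s: &NonEmpty) -> char {
    s.0.chars().next().unwrap() // safe: NonEmpty is never empty
}

fn main() {
    let ok = NonEmpty::new("hello").unwrap();
    assert_eq!(first_char(&ok), 'h');
    assert!(NonEmpty::new("").is_none());
}
```

If the invariant later changes (say, "non-empty and ASCII"), the constructor is the single place to update, and every call site that relied on the old assumption is surfaced by the compiler.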


In practice it would be encoded in comments, automated tests and docs, with varying levels of success.

It’s actually similar to tests in a way: they provide additional confidence in the code, but at the same time ossify it and make some changes potentially more difficult. Interestingly, they also make some changes easier, as long as not too many types/tests have to be adapted.


This reads to me like an argument for better refactoring tools, not necessarily for looser type systems. Those tools could range from mass editing tools, IDEs changing signatures in definitions when changing the callers and vice versa, to compiler modes where the language rules are relaxed.


I was thinking about C++ and if you change your mind about whether some member function or parameter should be const, it can be quite the pain to manually refactor. And good refactoring tools can make this go away. Maybe they already have, I haven’t programmed C++ for several years.


Constraints Liberate, Liberties Constrain. (I also recommend watching the presentation with the same title)


> All that information encoded in the type system makes explicit what is needed in any case and is otherwise only carried informally in peoples' heads by convention

this is, in fact, better for LLMs: they are better at carrying information and convention in their KV cache than at having to figure out the actual types by jumping between files and burning tokens in context/risking losing it on compaction (or getting it wrong and having to do a compilation cycle).

if a typed language lets a developer fearlessly build a semantically inconsistent or confusing private API, then LLMs will perform worse on it even though correctness is more guaranteed.


It is definitely harder to refactor Haskell than it is Typescript. Both are "safe" but one is slightly safer, and much harder to work with.


Capturing invariants in the type system is a two-edged sword.

At one end of the spectrum, the weakest type systems limit the ability of an IDE to do basic maintenance tasks (e.g. refactoring).

At the other end of the spectrum, dependent type and especially sigma types capture arbitrary properties that can be expressed in the logic. But then constructing values in such types requires providing proofs of these properties, and the code and proofs are inextricably mixed in an unmaintainable mess. This does not scale well: you cannot easily add a new proof on top of existing self-sufficient code without temporarily breaking it.

Like other engineering domains, proof engineering has tradeoffs that require expertise to navigate.
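To illustrate the dependent-type end of that spectrum, here is a small Lean-style fragment (a hypothetical sketch, not from any cited codebase): a subtype packages a value together with a proof about it, so changing the value means redoing the proof as well.

```lean
-- A subtype (sigma-like type) pairing a value with a proof about it.
abbrev Even := { n : Nat // n % 2 = 0 }

-- Constructing a value requires supplying the proof alongside the data;
-- here `rfl` works because `4 % 2` reduces to `0` by computation.
def four : Even := ⟨4, rfl⟩

-- Changing the data (say, to 5) breaks the proof term, not just the value,
-- which is exactly the "code and proofs inextricably mixed" cost above.
```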


> but one reaps the benefits later thanks to more confidence when refactoring or adding code.

To be honest, I believe it makes refactoring/maintenance take longer. Sure, safer, but this is not a one-time only price.

E.g. you decide to optimize this part of the code and only return a reference or change the lifetime - this is an API-breaking change and you have to potentially recursively fix it. Meanwhile GC languages can mostly get away with a local-only change.

Don't get me wrong, in many cases this is more than worthwhile, but I would probably not choose rust for the n+1th backend crud app for this and similar reasons.
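The reference/lifetime change described above can be sketched in Rust (hypothetical names, a minimal illustration): switching a getter from an owned value to a borrow avoids a clone but changes the API, because the result is now tied to the owner's lifetime.

```rust
// Hypothetical sketch: the owned-vs-borrowed API change being described.
struct Config { name: String }

impl Config {
    // Before: returns an owned String; callers may keep it indefinitely.
    fn name_owned(&self) -> String { self.name.clone() }

    // After: returns a borrow to avoid the clone. Any caller that stored
    // the String beyond the Config's lifetime must now be restructured,
    // and the fix can ripple through the callers of those callers.
    fn name_ref(&self) -> &str { &self.name }
}

fn main() {
    let c = Config { name: "db".to_string() };
    let owned: String = c.name_owned(); // independent of `c` from here on
    let borrowed: &str = c.name_ref();  // only valid while `c` is alive
    assert_eq!(owned, borrowed);
}
```

In a GC language both versions would typically have the same signature, which is why the change can stay local there.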


The choice of whether to use GC is completely orthogonal to that of a type system. On the contrary, being pointed at all the places that need to be recursively fixed during a refactoring is a huge saving in time and effort.


I was talking about a type system with affine types, since the topic was Rust specifically.

I compared it to a statically typed language with a GC, where the runtime takes care of a property that Rust has to enforce statically, requiring more complexity.


In my opinion, programming languages with a loose type system or no explicit type system only appear to foster productivity, because it is way easier to end up with undetected mistakes that can bite later, sometimes much later. Maybe some people argue that then it is someone else's problem, but even in that case we can agree that the overall quality suffers.


"In my experience, the more information is encoded in the type system, the more effort is required to change code."

Have you seen large js codebases? Good luck changing anything in it, unless they are really, really well written, which is very rare. (My own js code is often a mess)

When you can change types on the fly somewhere hidden in code ... then this leads to the opposite of clarity for me. And so lots of effort required to change something in a proper way, that does not lead to more mess.


There are two types of slowdown at play:

a) It’s fast to change the code, but now I have failures in some apparently unrelated part of the code base (JavaScript), and fixing that slows me down.

b) It’s slow to change the code because I have to re-encode all the relationships and semantic content in the type system (Rust), but once that’s done it will likely function as expected.

Depending on project, one or the other is preferable.


Or: I’m not going to do this refactor at all, even though it would improve the codebase, because it will be near impossible to ensure everything is correct after making so many changes.

To me, this has been one of the biggest advantages of both tests and types. They provide confidence to make changes without needing to be scared of unintended breakages.


There's a tradeoff point somewhere where it makes sense to go with one or the other. You can write a lot of code in Bash and Elisp without having to care about the type of whatever you're manipulating, because you're handling one type and encoding the actual values in a type system would be very cumbersome. But then there are other domains which are fairly well known, so the investment in encoding them in a type system does pay off.


Soon a lot of people will go out of their way to convince you that Rust is the most productive language, that functions having longer signatures than their bodies is actually a virtue, and that putting .clone(), Rc<> or Arc<> everywhere to avoid borrow-checker complaints makes Rust easier and faster to write than languages that don't force you to do so.

Of course this is hyperbole, but sadly not that much of one.
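For readers unfamiliar with the pattern being mocked, here is a minimal Rust sketch (hypothetical, purely illustrative) of the reference-counting-plus-interior-mutability escape hatch: when two owners need the same mutable data and restructuring for the borrow checker isn't obvious, Rc<RefCell<T>> is a common workaround.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared, mutable data behind reference counting and runtime
    // borrow checking, instead of plain references.
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));

    // Extra ceremony compared to a GC language, where both bindings
    // would simply alias the same object.
    let a = Rc::clone(&shared);
    let b = Rc::clone(&shared);

    a.borrow_mut().push(4);          // mutate through one handle...
    assert_eq!(b.borrow().len(), 4); // ...observe through the other
}
```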


Interesting, I was also thinking of the similarities with Jehovah’s witnesses. It’s as if they somehow got into the building, were offered a job and now want to convince everyone of the merits of technical salvation.

Rust the technology is not bad, even though it is still complicated like C++, has rather poor usability (also like C++) and is vulnerable to supply-chain attacks. But some of the people can be very irritating and the bad apples really spoil the barrel. There’s a commenter below gleefully writing that “C++ developers are spinning in their graves”. Probably slightly trolling and mentioning C++ doesn’t make sense in this kernel context, but such hostile, petty comments are not unheard of.


C++ devs don’t care what the Linux kernel’s written in.

But I did see an interesting comment from another user here which also reflects my feelings: Rust is pushed aggressively with different pressure tactics. Another comment pointed out that Rust is not about Rust programmers writing more Rust, but “Just like a religion it is about what other people should do.”

I’ve been reading about this Rust-in-the-kernel topic since the beginning, without getting involved. One thing that struck me is the obvious militant approach of the rustafarians, criticizing existing maintainers (particularly Ts’o and other objectors) and implying they’re preventing progress or are out of touch.

The story feels more like a hostile takeover attempt than technology. I also think that many C or C++ programmers don’t bother posting in these topics, so they’re at least partially echo chambers.


That's how it feels to me. There are crucial issues, namely that there is no spec and there is only one implementation. I don't know why Linus is ok with this. I'd be fine with it if those issues were resolved, but they aren't.


> There are crucial issues, namely that there is no spec and there is only one implementation. I don't know why Linus is ok with this.

I can try to provide a (weak) steelman argument:

Strictly speaking, neither the lack of a spec nor a single implementation have been blockers for Linux's use of stuff, either now or in the past. Non-standard GCC extensions and flags aren't exactly rare in the kernel, and Linus has been perfectly fine with the single non-standard implementation of those. Linus has also stated in the past (paraphrasing) that what works in practice is more important than what standards dictate [0]. Perhaps Linus feels that what Rust does in practice is good enough, especially given its (currently) limited role in the kernel.

Granted, not having a spec for individual flags/features is not equivalent to not having a spec for the language, so it's not the strongest argument as I said. I do think there's a point nestled in there though - perhaps what happens on the ground is the most important factor. It probably helps that there is work being done on both the spec and multiple implementation fronts.

[0]: https://lkml.org/lkml/2018/6/5/769


Business and Enterprise plans have a no-training-on-your-data clause.

I’m not sure personal Claude has that. My account has the typical bullshit verbiage with opt-outs where nobody can really know whether they’re enforceable.

Using a personal account is akin to sharing the company code and could get one in serious trouble IMO.


You can opt out of having your code trained on. When Claude Code first came out, Anthropic wasn't using CC sessions for training. They started training on them with Claude Code 2, which came out with Sonnet 4.5. Users are asked on first use whether to opt in or out of training.

