I wonder if we'll get to "vi for LLMs" - a model trained on that kind of text navigation, with the context around the cursor shown to it as it moves.
Would also be worth having special tokens for this kind of navigation.
IMHO D just missed the mark by putting the GC in the core. It was released at a time when a replacement for C++ was sorely needed, and it tried to position itself as that (obvious from the name).
But by including the GC/runtime it went into a category with C# and Java which are much better options if you're fine with shipping a runtime and GC. Eventually Go showed up to crowd out this space even further.
Meanwhile in the C/C++ replacement camp there was nothing credible until Rust showed up, and nowadays I think Zig is what D wanted to be with more momentum behind it.
Still kind of salty about the directions they took because we could have had a viable C++ alternative way earlier - I remember getting excited about the language a lifetime ago :D
I'd rather say that the GC is the superpower of the language. It allows you to quickly prototype without focusing too much on performance, but it also allows you to come back to the exact same piece of code and rewrite it using malloc at any time. C# or Java don't have this, nor can they compile C code and seamlessly interoperate with it — but in D, this is effortless.
Furthermore, if you dig deeper, you'll find that D offers far greater control over its garbage collector than any other high-level language, to the point that you can eagerly free chunks of allocated memory, minimizing or eliminating garbage collector stops where it matters.
Do you know of any popular real-time (for some definition of real-time) applications written in D? Like, streaming music or video? C has FFmpeg [0]:
> FFmpeg is proudly written in the C programming language for the highest performance. Other fashionable languages like C++, C#, Rust, Go etc do not meet the needs of FFmpeg.
How does D perform in benchmarks against other programming languages?
D by definition meets FFmpeg's criteria because it's also a C compiler. Because of that I never wondered how D performs in benchmarks, as I know for sure that it can give me the performance of C where I need it.
> C# or Java don't have this, nor can they compile C code and seamlessly interoperate with it — but in D, this is effortless.
C#'s C interop is pretty smooth; Java is a different story. The fact that C# is becoming the GC language in game dev is proving my point.
> Furthermore, if you dig deeper, you'll find that D offers far greater control over its garbage collector than any other high-level language, to the point that you can eagerly free chunks of allocated memory, minimizing or eliminating garbage collector stops where it matters.
Yes, and the no-gc stuff was just attempts to backpedal on the wrong initial decision and retrofit the use cases they should have targeted from the start, in my opinion.
Look, D was an OK language, but it had no corporate backing and there was no case where it was "the only good solution". If it had been an actual C++ modernization attempt that stayed C-compatible, it would have seen much better adoption.
True, but you still need to either generate or manually write the bindings. In D, you just import the C headers directly without depending on the bindings' maintainers.
> If it had been an actual C++ modernization attempt that stayed C-compatible, it would have seen much better adoption
Any D compiler is literally also a C compiler. I sincerely don't know how one can be more C-compatible than that.
> Yes, and the no-gc stuff was just attempts to backpedal on the wrong initial decision
I think that it was more of an attempt to appease folks who won't use GC even with a gun to their head.
I'm not saying D didn't have nice features - but if D/C#/Java are valid options I'm never picking D - language benefits cannot outweigh the ecosystem/support behind those two. Go picked a niche with backend plumbing and got Google backing to push it through.
Meanwhile, look at how popular Zig is getting two decades later. Why is that not D? D also has compile-time execution and has had it for over a decade, I think? Zig proves there's a need that D was in the perfect spot to fill if it had not made the GC decision - and we could have had two decades of software written in D instead of C++ :)
> D was in the perfect spot to fill if it had not made the GC decision
I just find it hard to believe that the GC is the one big wart that pushed everyone away from the language. To me, the GC combined with the full power of a systems language are the killer features that made me stick to D. The language is not perfect and has bad parts too, but I really don't see the GC as one of them.
> The fact that C# is becoming the GC language in game dev is proving my point.
That is just the Unity effect. Godot adopted C# because they were paid to do so by Microsoft.
C# allows far less control over garbage collection compared to D. The decision to use C# is partly responsible for the bad reputation of Unity games, as it causes a lot of stutter when people are not careful about how they manage memory.
The creator of the Mono runtime actually calls using C# his multi-million-dollar mistake and instead works on Swift bindings for Godot: https://www.youtube.com/watch?v=tzt36EGKEZo
> The fact that C# is becoming the GC language in game dev is proving my point.
Respectfully, it doesn't prove your point. Unity is a commercial product that employed C# because they could sell it easily, not because it's the best language for game dev.
Godot supports C# because Microsoft sponsored the maintainers precisely on that condition.
1. Runtime: A runtime is any code that is not a direct result of compiling the program's code (i.e. it is used across different programs) that is linked, either statically or dynamically, into the executable. I remember that when I learnt C in the eighties, the book said that C isn't just a language but a rich runtime. Rust also has a rich runtime. It's true that you can write Rust in a mode without a runtime, but then you can barely even use strings, and most Rust programs use the runtime. What's different about Java (in the way it's most commonly used) isn't that it has a runtime, but that it relies on a JIT compiler included in the runtime. A JIT has pros and cons, but they're not a general feature of "a runtime".
2. GC: A garbage collector is any mechanism that automatically reuses a heap object's memory after it becomes unreachable. The two classic GC designs, reference counting and tracing, date back to the sixties, and have evolved in different ways. E.g. in the eighties and nineties there were GC designs where either the compiler could infer a non-escaping object's lifetime and statically insert a `free`, or the language could track lifetimes ("regions", 1994) and the compiler could statically insert a `free` based on information annotated in the language. On the other hand, in the eighties Andrew Appel famously showed that moving tracing collectors "can be faster than stack allocation". So different GCs employ different combinations of static inference and dynamic information on object reachability to optimise for different things, such as footprint or throughput. There are tradeoffs between having a GC and not having one, and they also exist between Rust (GC) and Zig (no GC), e.g. around arenas, but most tradeoffs are among the different GC algorithms. Java, Go, and Rust use very different GCs with different tradeoffs.
So the problem with using the terms "runtime" and "GC" colloquially as they're used today is not so much that it differs from the literature, but that it misses what the actual tradeoffs are. We can talk about the pros and cons of linking a runtime statically or dynamically, we can talk about the pros and cons of AOT vs. JIT compilation, and we can talk about the pros and cons of a refcounting/"static" GC algorithm vs. a moving tracing algorithm, but talking in general about having a GC/runtime or not, even if these things mean something specific in the colloquial usage, is not very useful because it doesn't express the most relevant properties.
OP saying Rust has a kind of GC is absurd. Rust keeps track of the lifetimes of variables and drops them at the end of their lifetime. If you really want to call that a GC, you should at least make a huge distinction that it works at compile time: the generated code has drop calls inserted with no overhead at runtime. But no one calls that a GC.
You see, OP is trying to muddy the waters when they claim C has a runtime. While there is a tiny amount of truth to that, in the sense that there's some code you didn't write present at runtime, if that's how you define runtime then the term loses all meaning, since even assemblers insert code you don't have to write yourself, like keeping track of offsets and so on.
Languages like Java and D have a runtime that includes lots of things you don't call yourself, like the GC obviously, but also many stdlib functions that are needed and can't be removed because they may be used internally. That's a huge difference from inserting some code like Rust and C do.
To be fair, D does let you remove the runtime or even replace it. But it’s not easy by any means.
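For anyone following along, a minimal Rust sketch of the point being made here (illustrative only, not from either comment): the compiler decides statically where a value dies and inserts the cleanup call there, so nothing runs in the background at runtime.

```rust
// Minimal sketch: ownership-based cleanup with no collector running at runtime.
// The compiler determines statically where `buf` goes out of scope and inserts
// the equivalent of a `drop(buf)` call at that point.

struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs deterministically at the end of the owning scope.
        println!("freeing {} bytes", self.data.len());
    }
}

fn main() {
    {
        let buf = Buffer { data: vec![0u8; 1024] };
        println!("using {} bytes", buf.data.len());
    } // <- drop(buf) effectively happens here, at a statically known point
    println!("buffer is already gone");
}
```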
> If you really want to call that a GC you should at least make a huge distinction that it works at compile time: the generated code will have drop calls inserted without any overhead at runtime. But no one calls that a GC.
Except for the memory management literature, because it's interested in the actual tradeoffs of memory management. A compiler inferring lifetimes, either automatically for some objects or for most objects based on language annotations, has been part of GC research for decades now.
The distinction of working at compile time or runtime is far from huge. Working at compile time reduces the work associated with modifying the counters in a refcounting GC in many situations, but the bigger differences are between optimising for footprint or for throughput. When you mathematically model the amount of CPU spent on memory management and the heap size as functions of the allocation rate and live set size (residency), the big differences are not whether calling `free` is determined statically or not.
So you can call that GC (as is done in academic memory management research) or not (as is done in colloquial use), but that's not where the main distinction is. A refcounting algorithm, like that found in Rust's (and C++'s) runtime is such a classic GC that not calling it a GC is just confusing.
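To make the terminology concrete, a small Rust sketch (mine, purely illustrative): `Rc` keeps a reference count at runtime and frees the object when the last handle disappears, which is exactly the classic refcounting scheme the memory-management literature files under GC, even though the increments and decrements are inserted by the compiler.

```rust
use std::rc::Rc;

fn main() {
    // `Rc` maintains a runtime reference count alongside the data.
    let a = Rc::new(String::from("shared data"));
    println!("count after creating a: {}", Rc::strong_count(&a)); // 1

    {
        let b = Rc::clone(&a); // increments the count at runtime
        println!("count while b is alive: {}", Rc::strong_count(&b)); // 2
    } // b is dropped here: the count is decremented

    println!("count after b is gone: {}", Rc::strong_count(&a)); // 1
} // the count reaches 0 here and the String's memory is reclaimed
```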
My (likely unfair) impression of D is that it feels a bit rudderless: It is trying to be too many things to too many people, and as a consequence it doesn't really stand out compared to the languages that commit to a paradigm.
Do you want GC? Great! Don't want GC? Well, you can turn it off and lose access to most things. Do you want a borrow checker? Great, D does that as well, though less wholeheartedly than Rust. Do you want a safer C/memory safety? There's the SafeD mode. And probably more that I forget.
I wonder if all these different (often incompatible) ways of using D end up fragmenting the D ecosystem, and in turn make it that much harder for the language to gain critical mass.
> My (likely unfair) impression of D is that it feels a bit rudderless
The more positive phrasing would be that it is a very pragmatic language. And I really like this.
Currently, opinionated languages are really in vogue. Yes, they are easier to market, but I have personally soured on this approach now that I am a bit older.
There is not one right way to program. It is fun to use an opinionated language until you hit a problem it doesn't cover very well, and suddenly you are in a world of pain. I like languages that give me escape hatches and allow me to program the way I want to.
> My (likely unfair) impression of D is that it feels a bit rudderless: It is trying to be too many things to too many people, and as a consequence it doesn't really stand out compared to the languages that commit to a paradigm.
This can very clearly be said about C++ as well, which may have started out as C With Classes but became very kitchen sinky. Most things that get used accrete a lot of features over time, though.
FWIW, I think "standing out" due to paradigm commitment is mostly downstream of "xyz-purity => fewer ways to do things => have to think/work more within the constraints given". This then begs various other important questions, of course.. E.g., do said constraints actually buy users things of value overcoming their costs, and if so for what user subpopulations? Most adoption is just hype-driven, though. Not claiming you said otherwise, but I also don't think the kind of standing out you're talking about correlates so well to marketing. E.g., browsers marketed Javascript (which few praised for its PLang properties in early versions).
Re: the point about Zig: especially since I used and played a lot with D's BetterC mode as a student, I wonder what Walter, as a language designer, thinks about the development and rise in popularity of Zig. Of course, thinking "strategically" about a language's adoption comes off as Machiavellian in a crowd of tinkerers/engineers, but I can't help but wonder.
Zig got so deep into avoiding "hidden behavior" that destructors and operator overloading were banned. Operator overloading is indeed a mess, but destructors are too useful. The only compromise for destructors was adding the "defer" feature. (Was there ever a corresponding "error if you don't defer" feature?)
Fil-C, the new memory-safe C/C++ compiler, actually achieved that by introducing a GC. With that in mind, I'd say D was kind of a misunderstood prodigy in retrospect.
There are two classes of programs: stuff written in C for historic reasons that could have been written in a higher-level language, but where a rewrite is too expensive - that's Fil-C.
Stuff where you actually need low-level control - Rust/C++/Zig.
Fil-C works fine with all C code, no matter how low-level. There's a small performance overhead, but for almost every scenario it's an acceptable one!
I get where you're coming from and if this was a package I'd agree - but having this built in/part of the tooling is nice - one less dependency - bash isn't as ubiquitous as you assume.
They are using this to virtue signal - but in reality it's just not compatible with their business model.
Anthropic is mainly focusing on B2B/enterprise and tool-use cases. In terms of active users I'd guess Claude is a distant last, but in terms of enterprise/paying customers I wouldn't be surprised if they were ahead of the others.
Yeah you can't predict anything with 100% certainty either
By repeating propaganda at you, though, desperate financiers can hack your brain's innate prediction loop to convince you you're knocking on the door of infamy.
Look, I get you. You're trying to fill the hole created when father never came back with the cigarettes. Mom always blamed you for his leaving. But, little Warboy screaming "Witness me make line go up!", everyone else is a self-selecting meat suit too, working unintentionally (simply distracted by their own lives' needs, they never encounter your pitch) and in some instances intentionally (fomenting economic and political instability) against you to support themselves.
> Tesla is clearly benefiting from protectionism and its sales would collapse if BYD were allowed to openly sell in the US
So would the sales of most EU carmakers in Europe. China is not playing by the same rules, and everyone with domestic car manufacturing is slamming them with tariffs.
How isn't China playing by the same rules? Every country subsidises and supports industries it thinks are important; surely nothing would stop Germany from investing in Volkswagen and BMW, or the US from investing in Ford, the same way China invests in BYD?
Environmental regulations around the rare-earth minerals needed for the batteries. China loosens them, making mining cheaper, which starves out the global competition that actually has tighter regulations protecting the environment.
Then of course there are cost of living and salaries, both of which are lower in China than where most legacy auto manufacturers are based.
So China can pay their employees less and pollute the environment more in order to create an affordable, very high quality vehicle.
I can understand a small tariff to help "even the playing field", but not the 100% tariff or whatever was proposed against BYD.
Hm, how are tariffs state subsidies? They're a tax on some products to give other products a competitive edge, but that feels different from a subsidy?
And what does that have to do with China playing by different rules than the west?
If not for the tariffs, the domestic company would have to charge lower prices to make sales. Thus tariffs provide domestic companies with additional revenue from domestic consumers.
Tariffs and subsidies both help companies succeed, but they're not the same thing. For one, tariffs can only really help your country's companies be competitive within your country. Subsidies can help your companies be competitive globally.
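Toy numbers, purely to illustrate the distinction: with a 25% tariff, a $30k import lands at $37.5k, so a domestic maker can charge, say, $36k at home and still win the sale, but it gets no help in export markets; a $5k-per-car subsidy instead lowers that maker's costs on every car it sells, at home and abroad.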
I can't say for sure because I don't know the full situation, but I've heard a similar story a few times, and IMNSHO looking at it as "they made the wrong choice initially" is off, because it assumes that a C# team could have delivered the initial version in the timeline required to get the project to the next level.
If they had a team that knew Nest and could iterate fast with it, that was the perfect choice. I've worked at multiple agencies over the years and was mostly on C# teams. Whenever a C# team got onto greenfield/startup projects, it ended up being a shit-show of self-inflicted slowdowns/over-complications. I've seen 3 projects where, in large part due to slow development of the MVP, the project missed its investment window or ran out of funds.
In my experience Node/Rails teams were much more capable of delivering shit that works. It would eventually have a bunch of problems that would be non-issues by default on a different stack like ASP.NET, but the difficulty of getting to that point, and the realization that just being in that situation is a win, is what most engineers miss.
> ...because it assumes that a C# team could have delivered the initial version in the timeline required to get the project to the next level.
I linked the Nest.js project because you can see how similar these two are[0], with Nest.js leaning into the OOP aspects of TS at its very core. Controllers and services are classes in Nest.js, for example. It uses a somehow more complicated DI system than ASP.NET. It is essentially Spring Boot or ASP.NET, but without the maturity, performance, and ergonomics.
The team had developers that had done C# before and later hires also included former C# developers.
TS itself was new for the team at the time and several mistakes were made along the way resulting in a "dual-ORM" situation (Prisma + Drizzle; both with their faults) that ends up sapping a lot of productivity (one of the drivers to move to C#).
Hmmm. I have a different take there: when you are young and wild, you achieve stuff because you think later and instantly produce code. When you get older, you do it the other way around, leading to your example.
In the early 2000s I was at a startup and we delivered as rapidly in C# as we did in PHP. We just coded the shit.
I think what you said is a healthy progression: write dumb code -> figure out it doesn't scale -> add a bunch of clever abstraction layers -> realize you fucked yourself when you're on call for 12 hours over the weekend trying to fix a mess for a critical issue (times however many repeats it takes for it to sink in) -> write dumb code and only abstract when necessary.
Problem is, devs these days start from step two because that's what every source teaches - they never learned why it's done by going through step one - it's all theoretical examples and dogma. Or they are solving problems at Google/Microsoft/etc. scale while they are a 5-person startup. But everyone wants to apply the "lessons learned" by big tech.
And all this advice usually comes from very questionable sources - tech influencers and book authors. People who spend more time talking about code than reading/writing it, and who earn money by selling you on an idea. I remember opening Uncle Bob's repo once when I was learning Clojure - the most unreadable, scattered codebase I've seen in the language. I would never want to work with that guy - yet Clean Code was a staple for years. DDD preachers, event-driven gurus.
C# is the community where I've noticed this the most.
Same experience here. C# might have superior tooling, performance, whatever, but the OOP baggage is too heavy. In theory you can write something other than a giant over-complicated, over-abstracted pile of OOP nonsense in C#, but every team I've seen has.
You can write very functional C#. Our codebase is a mix, with some aspects being functional (`ErrorOr`[0] being a big part of it, as well as `OneOf`[1]) and others OOP. Our core, common services all return `ErrorOr` to allow fluent call chaining at the top of the stack (controllers).
Modern C# features like `switch` expressions[2] (not `switch-case`) and pattern matching mean that it is possible to write very terse, expressive code. Extension members and methods[3] go a long way as well by making the builder pattern easier to implement.
Overall, it's up to the team to make use of the tools provided by the C# team. You can write C# in a very OOP-heavy way (as is possible with TS in the case of Nest.js); you can also write it in a very functional way, given the many functional features adopted from F# over the years. It's up to the team.
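Not C#, but for readers who haven't met the pattern: `ErrorOr`/`OneOf` are essentially discriminated unions in the spirit of Rust's `Result`, and the "chain in the services, match once at the controller" style looks roughly like this sketch (names and error cases are made up for illustration):

```rust
// Rough analogue of the ErrorOr pattern described above: services return a
// success-or-error union, calls are chained fluently, and the top of the stack
// pattern-matches once to turn the outcome into a response.

#[derive(Debug)]
enum AppError {
    NotFound,
    Validation(String),
}

fn load_user(id: u32) -> Result<String, AppError> {
    if id == 0 {
        Err(AppError::NotFound)
    } else {
        Ok(format!("user-{id}"))
    }
}

fn check_active(user: String) -> Result<String, AppError> {
    if user.ends_with("13") {
        Err(AppError::Validation("account suspended".into()))
    } else {
        Ok(user)
    }
}

fn handle_request(id: u32) -> String {
    // Fluent chaining in the "service" layer...
    let outcome = load_user(id).and_then(check_active);

    // ...and one exhaustive match at the "controller" layer, comparable to a
    // C# switch expression over the union.
    match outcome {
        Ok(user) => format!("200 OK: {user}"),
        Err(AppError::NotFound) => "404 Not Found".to_string(),
        Err(AppError::Validation(msg)) => format!("422 Unprocessable: {msg}"),
    }
}

fn main() {
    println!("{}", handle_request(7));
    println!("{}", handle_request(0));
}
```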
That's a perfect example of making it overcomplicated, just in the FP direction.
C# uses exceptions for error handling. It has its own control-flow primitives. C# developers know how to work with them, and everything else uses them. Why would I want to pull in a rando's GitHub DSL and types to pretend I'm writing F# when I can just use F#, which has first-class support for this?
C# is already very functional when it comes to LINQ. In real-world use, this just feels like extending in the same direction while still preserving familiarity, whereas F# does feel like an entirely different language.
Learning C# when you know TS is like learning Portuguese when you know Spanish.
> In theory you can write something other than a giant over-complicated, over-abstracted pile of OOP nonsense in C#, but every team I've seen has.
C# syntax is fine, but has a rotten[1] culture/conventions. I suppose it makes sense that Microsoft's "Java-killer" became enterprise-y, with the same over-engineered indirections.
1. IMO - I find it very unpleasant and never allowed myself to Internalize the IConventions out of spite. YMMV.
I think by the time people are willing to spend that kind of money we'd have to be in AGI territory and at that point any economic bets are off - investing in AI feels strange for that reason - I don't see a world where the slop becomes valuable enough to cover current investment levels. And I don't see a world in which investments matter if we get superhuman AGI.
As much as I love the idea of moving to Linux - Mac hardware is like two years ahead of PC currently in pretty much any regard aside from gaming. I keep looking for an iteration where it makes sense to switch, but currently the Intel Core 3 stuff is at best comparable to the M5 base. Strix Halo is much more power-hungry and also not that impressive other than having a bunch of cores. Nothing comes close to the Pro/Max chips in the M4 series. And with RAM/storage pricing, Apple upgrades are looking reasonably priced (TBD when M5 Pro devices launch).
So I can either get a top-tier tool when I upgrade this year, or I can buy a subpar device - and the power management is likely going to be even worse on Linux.
I think this mostly only holds if you use local compute in a portable form factor.
Most of my personal development these days is done on my home server - a 9995WX, 768 GB of RAM, and an RTX 6000 Pro Blackwell GPU running headless. My work development happens on a cloud workstation with 64 cores and 128 GB of RAM, but builds are distributed and I can dial up the box size on demand for heavier development.
I use laptops practically entirely as network client devices. Browser, terminal window, perhaps a VS Code based IDE with a remote connection to my code. Tailscale on my personal laptop to work anywhere.
I'm not limited by local compute, my devices are lightweight, cheap(ish) and replaceable, not an investment.
I'd like to use this kind of setup, but unfortunately every time I try there are just so many annoying edge cases wasting my time. Especially when I need to do FE/mobile work - but even BE has gotchas. I guess it depends on your environment - I'll try making this setup work again sometime in the next few months.
> Mac hardware is like two years ahead of PC currently in pretty much any regard aside from gaming
...and aside from any contemporary ergonomics. Seriously, MacBooks are an environmental hazard at this point: ultra-glossy screen, hand-twisting keyboard, wrist-cutting sharp edges, lack of modern surge protection, etc. etc. I genuinely don't understand the sentiment that MacBook hardware is good.
So whatever resources you have, Apple will use them mostly to render 3D glass effects. With Debian (Xfce) - I can't speak for other desktop environments - you need roughly a third of the resources to run the OS itself.
Apple is disabling downgrading across all of iOS, and is starting to do the same with macOS. So you need to keep old hardware to run older macOS versions, and it's only a matter of a few years before Tahoe is the latest OS you can run on your Mac.
Oh, I must be clear here: I'm not considering M1 Macs or later, since Apple closed the ecosystem with Apple Silicon.
What you did is a downgrade within what's called the supported OS range.
However, if you decide to downgrade to Catalina on an M1 Mac, it's not possible — Big Sur is the earliest version that runs on Apple Silicon.
Anyway, you cannot downgrade to a macOS version older than what your Mac originally came with. So if you buy a Mac now, Tahoe will be the minimum option.
Old Macs can certainly be downgraded. iOS doesn't allow it, though, and they pulled the latest security update, which fucking sucks. And if you buy an M5, Tahoe is the only OS that's available.
I have nothing against old Macs and MacOS, but I certainly won't be buying anything since the Apple Silicon switch, because now only Apple controls which OS you can run.
The requested correction was about the "iOS has been locked down on downgrading since forever" part, and another reply had already provided one for the M5 being sold (though seemingly only the Pros) :)
That's a very temporary solution, to be fair. KDE and even, shudder, GNOME put macOS and Windows to shame when it comes to responsiveness, performance, and resource usage.
I mean, KDE does 3x the stuff for 1/3 the cost. That's more memory and CPU for your IDE or, more likely, chrome tabs.