Likewise interested in the authoritative answer, but: if I needed to write a decent chunk of code that had to run as close to wire/CPU limits as possible and run across popular mobile and desktop platforms I would 100% reach for Rust.
Go has a lot of strengths, but embedding performance-critical code as a shared library in a mobile app isn't among them.
I think that any kind of “modern ops” necessarily includes coding, even if there isn’t a ton of Python or Rust being generated as part of the workflow.
Kubernetes deployment configurations and Ansible playbooks are code. PromQL is code. Dockerfiles and cloud-init scripts are code. Terraform HCL is code.
It’s all code I personally hate writing, but that doesn’t make it less valid “software development” than (say) writing React code.
I think you have it backwards. Systems engineering is the big picture discipline of designing & managing complex systems while config management is a specific process within that.
AdaFruit and SparkFun both provide MCUs, sensors, and other peripherals that integrate well. Couple that with copious libraries and example projects and you may be up and running without having to stare at data sheets and wiring diagrams and JTAG output just to (say) get a temperature reading and display it on a tiny OLED screen.
All of that plus maintaining inventory nearer their customers, doing effective QC on units they ship, writing good docs, etc. means you’re getting something a lot more like a “big OEM” experience from the hardware vendor, even if you’re ordering a handful of parts.
The generic AliExpress vendors, in my experience, do not do most of those things. They all support Arduino and/or PlatformIO, and sometimes a “native” SDK like mbed, but you’re often on your own figuring out how to integrate that bare MCU with the other devices you need for a complete solution. Docs are often incomplete or untranslated, and it can be hard to know exactly which chip (or which associated components, like an onboard BME sensor) is on there. It can change between board revisions, or even between identically-named parts from different vendors.
There are other players like M5 and RAK who make nice modular platforms as well, but their prices tend to be up there with AF and SF.
In ~25 years or so of dealing with large, existing codebases, I've seen time and time again that there's a ton of business value and domain knowledge locked up inside all of that "messy" code. Weird edge cases that weren't well covered in the design, defensive checks and data validations, bolted-on extensions and integrations, etc., etc.
"Just rewrite it" is usually -- not always, but _usually_ -- a sure path to a long, painful migration that ends up not quite reproducing the old features/capabilities while adding new bugs and edge cases along the way.
> With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.
An LLM rewriting a codebase from scratch is only as good as the spec. If “all observable behaviors” are fair game, the LLM is not going to know which of those behaviors are important.
Furthermore, Spolsky talks about how to do incremental rewrites of legacy code in his post. I’ve done many of these and I expect LLMs will make the next one much easier.
>An LLM rewriting a codebase from scratch is only as good as the spec. If “all observable behaviors” are fair game, the LLM is not going to know which of those behaviors are important.
I've been using LLMs to write docs and specs and they are very very good at it.
That’s a fair point: I agree that LLMs do a good job of predicting the documentation that might accompany some code. It’s a relief when I can rely on the LLM to write docs that I only need to edit and review.
But I’m using LLMs regularly and, I feel, pretty effectively -- including Opus 4.5 -- and these “they can rewrite your entire codebase” assertions just seem wildly incongruous with my lived experience of guiding LLMs to write even individual features bug-free.
When an LLM can rewrite it in 24 hours and fill in the missing parts in minutes, that argument is hard to defend.
I can vibe code what a dev shop would charge 500k to build, and I can solo it in 1-2 weeks. This is the reality today. The code will pass quality checks; it doesn’t need to be perfect, it doesn’t need to be clever, it just needs to work.
It’s not difficult to see this, right? If an LLM can write English it can write Chinese or Python.
Then it can run itself, review itself and fix itself.
The cat is out of the bag, and as for what it will do to the economy… I don’t see anything positive for regular people. “Write some code” has turned into “prompt some LLM.” My phone can outplay the best chess player in the world; are you telling me you think that whatever unbound model Anthropic has sitting in their data center can’t out-code you?
What mainstream software product do I use on a day to day basis besides Claude?
The ones that continue to survive all build around a platform of services, MSO, Adobe, etc.
Most enterprise product offerings: platform solutions, proprietary data access, proprietary / well-accepted implementations. But let’s not confuse that with the ability to clone it; it doesn’t seem far-fetched to get 10 people together and vibe out a full Slack replacement in a few weeks.
If an LLM wrote the whole project last week and it already requires a full rewrite, what makes you think that the quality of that rewrite will be significantly higher, or that it will address all of the issues? Sure, it's all probabilistic, so there's probably a nonzero chance of it stumbling into something where all the moving parts are moving correctly, but to me it feels like, with our current tech, those odds keep shrinking as you toss on more requirements and features, like any mature project has. It's like really early LLMs: if they just couldn't parse what you wanted, past a certain point you could have regenerated the output a million times and nothing would change.
That CPU is pretty much a toy compared to (say) a brand-new M5 or EPYC chip, but it similarly eclipses almost any MCU you can buy.
Even with fast AES acceleration on the CPU/MCU — which I think some Cortex MCUs have — you’re really going to struggle to get much over 100 Mbit/s of encrypted traffic handling, and that’s before the I/O interrupts take over the whole chip to shuttle packets on and off the wire.
Modern crypto is cheap for what you get, but it’s still a lot of extra math in the mix when you’re trying to pump bytes in and out of a constrained device.
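To put rough numbers on that, here’s a back-of-envelope sketch (Python just for clarity; the cycle count is an illustrative assumption, not a benchmark of any particular chip):

```python
# Back-of-envelope: how much encrypted traffic a small core can push,
# given the cipher's cost in cycles per byte. The inputs below are
# illustrative assumptions, not measurements of any particular MCU.

def max_throughput_mbit(clock_hz: int, cycles_per_byte: float) -> float:
    """Upper bound on cipher throughput in Mbit/s, ignoring I/O and interrupts."""
    return clock_hz / cycles_per_byte * 8 / 1e6

# A 150 MHz core spending ~15 cycles/byte on the cipher alone tops out
# around 80 Mbit/s -- before any cycles go to actually moving packets.
print(max_throughput_mbit(150_000_000, 15))  # -> 80.0
```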
You're looking at the wrong thing: WireGuard doesn't use AES, it uses ChaCha20. AES is really, really painful to implement securely in software alone, and the result performs poorly. But ChaCha only uses addition, rotation, and XOR on 32-bit numbers, which makes it pretty performant even on fairly computationally limited devices.
For reference, I have an implementation of ChaCha20 running on the RP2350 at 100 Mbit/s on a single core at 150 MHz (910/64 = ~14.22 cycles per byte). That's a lot for a cheap microcontroller costing around 1.5 bucks total. And that's not even taking into account using the other core the RP2350 has, or overclocking (it also runs fine at 300 MHz, at double the speed).
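For the curious, the add/rotate/XOR core being described really is tiny. Here's a minimal Python sketch of the ChaCha quarter round per RFC 8439 (on an MCU you'd write this in C or assembly; Python is just for readability):

```python
# ChaCha20 quarter round (RFC 8439): only 32-bit addition, rotation,
# and XOR -- the "ARX" mix that makes it friendly to small MCUs.

MASK32 = 0xFFFFFFFF

def rotl32(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a: int, b: int, c: int, d: int):
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, section 2.1.1:
assert quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567) == \
    (0xEA2A92F4, 0xCB1CF8CE, 0x4581472E, 0x5881C4BB)
```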
You’re totally right; I got myself spun around thinking AES instead of ChaCha because the product I work on (ZeroTier) started with the latter initially and moved to AES later. I honestly just plain forgot that WireGuard hadn’t followed the same path.
An embarrassing slip, TBH. I’m gonna blame pre-holiday brain fog.
We host a fair bit of Terraform code in repos on GitHub, including the project that bootstraps and manages our GH org’s config: permissions, repos, etc.
Hilariously, the official Terraform provider for GitHub is full of N+1 API call patterns — aka linearly scaling hotspots — so even generating a plan requires making a separate (remote, rate-limited) API call to check things like the branch protection status of every “main” branch, every action and PR policy, etc. As of today it takes roughly 30 minutes to do a full plan, which has to run as part of CI to make sure the pushed TF code is valid.
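For anyone unfamiliar with the shape of the problem, here's an illustrative Python sketch of the N+1 pattern (the client and method names are made up for illustration, not the actual provider or GitHub API internals):

```python
# Hypothetical sketch of the N+1 call pattern described above. Planning
# N repos costs 1 + N remote round trips, so plan time scales linearly
# with repo count -- and each call also burns rate-limit budget.

class FakeGitHubAPI:
    """Stand-in for a rate-limited REST client; counts remote calls."""
    def __init__(self, repos):
        self.repos = repos
        self.calls = 0

    def list_repos(self, org):
        self.calls += 1          # 1 listing call
        return list(self.repos)

    def get_branch_protection(self, org, repo, branch):
        self.calls += 1          # +1 call per repo, each rate-limited
        return {"repo": repo, "branch": branch, "enforced": True}

def plan(api, org):
    # One listing call, then a separate call per repo just to read the
    # protection status of its "main" branch -- the N+1 hotspot.
    return [api.get_branch_protection(org, r, "main")
            for r in api.list_repos(org)]

api = FakeGitHubAPI([f"repo-{i}" for i in range(50)])
plan(api, "example-org")
print(api.calls)  # -> 51: one listing call plus 50 per-repo calls
```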
With this change, we’ll be paying once to host our projects and again for the privilege of running our own code on our own machines when we push changes…and the bill will keep growing because the speed of their API sets an artificial lower bound on the runtime of our basic tests.
(To be fair, “slow” and “Terraform” often show up and leave parties at suspiciously similar times, and GitHub is far from the only SaaS vendor whose revenue goes up when their systems get slower.)
I’m a huge Framework fan: preordered the 13 and Desktop, have done mainboard + LCD upgrades on personal and work machines, etc. Likewise, I’ve used ARM machines as general-purpose Linux workstations, starting with the PineBook Pro up to my current Radxa Orion. It seems like a great combo!
Unfortunately, firmware and OS support are hard for any vendor, especially one as small (compared to, say, Lenovo or HP) and fast-moving as Framework. Spreading that to yet another ISA and driver ecosystem seems like it would drag down quality and pace of updates on every other system, which IMHO would be a bad trade.
I bought the InkPalm a while back, and the Palma recently. (Note: I have a "thing" when it comes to e-Ink devices and so tend to just pull the trigger on new interesting ones when they come out.)
They're superficially similar, but the InkPalm feels like a very limited device by comparison. No Play Store, slower refresh, much older Android build, and a mostly-Chinese localized UI with partial English translation.
If you really just need a basic e-reader that can handle MOBI and ePub files and are willing to put up with a somewhat frustrating experience, the InkPalm is fine. OTOH, if you spend a lot of time reading long-form text but also want to occasionally run other apps -- Termux in particular is a pretty great tool to have on a small e-Ink device when paired with a small BT keyboard -- the Palma is meaningfully better.