ExplainingComputers with Christopher Barnatt is a regular online series that hits just the right spot for me. He's got the professorial yet whimsical energy of an '80s BBC presenter, making the show feel a bit like The Computer Programme or Beyond 2000.
I've seen a couple of apps try to use Play Integrity, get blocked by GrapheneOS, and keep on running. Maybe I'm being locked out of something, but it's not something I use anyway.
Note that I don't use banking or government apps. If I bank online it's via the web.
It does seem like a lot of apps continue to function on GrapheneOS after the "Play Integrity" check fails (or at least after Graphene notifies the user that the Play Integrity API has been called). I suspect either:
A) These apps have implemented only the check so far, and will eventually refuse to run or limit functionality at some point in the future.
B) These apps have noted the failure, and certain functionality, especially communicating with servers to load "protected" content, will fail even if the app otherwise continues to run.
One of the reasons I left a senior management position at my previous 500-person shop was that this was being done, and not even accurately: Copilot usage via the IDE wasn't being tracked at all, only the various other usage paths were.
It doesn't take long for shitty small companies to copy the shitty policies and procedures of successful big companies. It seems even intelligent executives can't get correlation and causation right.
OpenBSD—all the BSDs, really—has an even more unstable ABI than Linux. The syscall interface, in particular, is subject to change at any time. Statically linked binaries built for one Linux version will generally Just Work on any subsequent version; this is not the case for BSD!
There's a lot to like about BSD, and many reasons to prefer OpenBSD to Linux, but ABI backward-compatibility is not one of them!
One of Linux's main problems is that it's difficult to supply and link versions of library dependencies local to a program. Janky workarounds such as containerization, AppImage, etc. have been developed to combat this. But in the Windows world, applications literally ship, and link against, the libc they were built with (msvcrt, now ucrt I guess).
It would not solve the ABI problem, but it would at least give an opinionated, end-to-end API that was at some point the official API of an OS. Its design has earned some praise, too.
It was more about everything since the Amiga being a regression. BeOS was sometimes called a successor (in spirit) to the Amiga: a fun, snappy, single-user OS.
I regularly install HaikuOS in a VM to test it, and I think I could probably use it as a daily driver, but ported software often doesn't feel quite right.
The future of software development is systems analysts.
It was discovered in the 1970s (at the latest) that the hard part of software is figuring out WHAT to build, not HOW to build it—and the two should be separate responsibilities with separate personnel bearing separate talents. (Do NOT let your programmers do systems analysis!)
Furthermore, it was discovered that without a clear, precise description of what to build, even the most talented programmers will happily use industry best practice to build the wrong thing, and that is worse than useless: it's a net cost to the business. This is something that programmers have in common with LLMs. So it turns out, in order to build software correctly and cost-effectively, you need to perform lots of systems analysis up front, then through a stepwise refinement process design and document your solution, yielding a detailed spec. It also turns out that LLMs, like human programmers, do just fine if you give them a detailed spec and hold them accountable to that spec!
So much of the "grunt work" of programming can be handed to the LLM—IF you perform the necessary systems analysis up front to determine what needs to be built! (Bryce's Law: Programming is a translation function, taking human-readable requirements to machine-executable instructions.)
As the AI era matures we are either going to see a revival of PRIDE—the original and still most complete software development methodology, but minus the programmers Milt and Tim Bryce loathed so much—or the entire collapse of software as a field.
> XP putting a customer on the team was the best thing in the methodology.
Recently my boss said to me: "Customers want something that WORKS. If you deliver something, and it doesn't work, what's the customer going to think?" The huge drawback to putting a customer on the team is that the customer probably doesn't want to know, let alone be involved with, how the sausage is made. They want a turnkey solution unveiled to them on the delivery date, all ready to go, with no effort on their part.
Generally what you want is a customer proxy in that role, who knows or can articulate what the customer needs better than the customer themselves can. Steve Jobs was a fantastic example of someone who filled this role.
The job of the big-picture software architect is not to give "generic software design advice". It's precisely to see the big picture: understand the information needs and flows of the business and determine WHAT needs to be built in order to serve those precise needs. Let the programmers worry about the details. That's their job and their strength: they are detailists who are fluent in the language of the machine, but their biggest drawback is, they tend to have difficulty seeing the big picture and understanding how those details fit into a greater whole.
One does not need to be a programmer in order to be a great systems analyst/architect. Matter of fact it's the opposite: great analysts are good with people, and have a strong intuitive grasp of what people need in order to effectively run the business. Leaving that to programmers is a recipe for disaster, as without documentation of existing business systems and requirements and a solid design, programmers will happily build the wrong thing.
https://explainingcomputers.com
https://m.youtube.com/@explainingcomputers