Somewhat related - I want control over devices in my home. Too many things these days need an internet connection to be useful. I run my own OpenWRT router and set up firewall policies for them so they only get the access they need to provide their function. But I'm getting tired of it.
I'm looking for a nice tool that would give me that "control" over my home network -- at the very least, proper observability. Like "little snitch / open snitch" but running on my home router... and I haven't found anything like that yet.
The word processors of 30 years ago often had limits like “50k chapters” and required “master documents” for anything larger. Lotus 1-2-3 had far fewer rows and columns than modern Excel.
Not an excuse, of course, but the older tools are not usable anymore if you have modern expectations.
Python’s dicts for many years did not return keys in insertion order (from when Tim Peters improved the hash in, iirc, 1.5 until Raymond Hettinger improved it further in, iirc, 3.6).
After the 3.6 change, they were returned in insertion order. And people started relying on that - so at a later stage (3.7), this became part of the spec.
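For reference, the behaviour you now get on any modern CPython (guaranteed by the language since 3.7; in 3.6 it was still just an implementation detail):

```python
# Insertion order is preserved, regardless of key hashing or sort order.
d = {}
d["zebra"] = 1
d["apple"] = 2
d["mango"] = 3

print(list(d))  # ['zebra', 'apple', 'mango'] - insertion order, not sorted
```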
I'd personally declare dead everything except 3 and 4 because, unlike the rest, polymorphism is genuinely useful (e.g. Rust traits, Kotlin interfaces)
Trivia: Kotlin interfaces were initially called "traits", but with the Kotlin M12 release (2015) they were renamed to interfaces, because Kotlin traits basically are Java interfaces. [0]
1 is about encapsulation, which makes it really easy to unit test stuff. Say you need to access a file or a database in your test: you can write an abstraction on top of the file or db access and mock that (see the sketch after this list).
2 indeed never made sense to me, since once everything is compiled down to ASM, "protected" means nothing, and if you can get a pointer to the right offset you can read "passwords". The claim that enforcing what can and cannot be reached from a subclass helps security never made sense to me.
3 I never liked function overloading; I prefer optional arguments with default values. If you need a function to work with multiple types of one parameter, make it a template and constrain which types can be passed.
7 interfaces are a must-have for when you want to add tests to a bunch of code that has no tests.
8 Rust macros do this, and it's a great way to add functionality to your types without much hassle.
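To make points 1 and 7 concrete, here's a minimal Python sketch (names like `UserStore` are invented for the example): depend on an abstraction instead of the concrete file/db access, and swap in a trivial fake for tests.

```python
from typing import Protocol

class UserStore(Protocol):
    """Abstraction over the real db/file access (hypothetical interface)."""
    def load(self, user_id: int) -> str: ...

class DbUserStore:
    """Production implementation; would talk to the database."""
    def load(self, user_id: int) -> str:
        raise NotImplementedError("real db access goes here")

def greet(store: UserStore, user_id: int) -> str:
    # Code under test depends only on the abstraction.
    return f"hello, {store.load(user_id)}"

# In a unit test, a fake stands in for the database - no mocking library needed.
class FakeUserStore:
    def load(self, user_id: int) -> str:
        return "alice"

assert greet(FakeUserStore(), 42) == "hello, alice"
```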
Indeed. But ... do not confuse your model with reality.
There's a folk story - I don't remember where I read it - about a genealogy database that made it impossible to e.g. have someone be both the father and the grandfather of the same person. Which worked well until they had to put in details about a person who had fathered a child with his own daughter - and was thus both the father and the grandfather of that child. (Sad as it might be, it is something that can, in fact, happen in reality, and unfortunately does).
While that was probably just database constraints of some sort, which could easily be relaxed, and not strictly "unrepresentable" like in the example in the article - it is easy to paint yourself into a corner by making a possible state of the world, which your mental model deems impossible, unrepresentable.
Your example doesn’t validate your point. That’s a valid state made unrepresentable, not an invalid state made unrepresentable. Your example simply demonstrates a poorly architected set of constraints.
The critical thing with state and constraints is knowing at what level the constraint should be. This is what trips up most people, especially when designing relational database schemas.
Any assumption made in order to ship a product on time will eventually be found to have been incorrect and will cause 10x the cost it would have taken to properly design the thing in the first place. The problem is that if you do that proper design, you never survive to the stage where you have that problem.
I think the solution to that is to continuously refactor, and to spell out very clearly what your assumptions are when you are writing the code (which is an excellent use for comments).
Continuous refactoring is much easier with well constrained data/type schemas. There are fewer edge cases to consider, which means any refactoring or data migration processes are simpler.
The trick is to make the schema represent what you need - right now - and no more. Which is the point of the “Make your invalid states unrepresentable” comment.
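A minimal sketch of that idea in Python (the types are invented for the example): instead of one record full of nullable fields whose combinations can contradict each other, model only the states you actually need right now, as a tagged union.

```python
from dataclasses import dataclass
from typing import Union

# Each state carries exactly the data that state needs - there is no way
# to construct a "loaded but also failed" value.
@dataclass
class Loading:
    pass

@dataclass
class Loaded:
    body: str

@dataclass
class Failed:
    error: str

Response = Union[Loading, Loaded, Failed]

def render(r: Response) -> str:
    if isinstance(r, Loaded):
        return r.body
    if isinstance(r, Failed):
        return f"error: {r.error}"
    return "spinner"

print(render(Failed("timeout")))  # error: timeout
```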
I do see how it does, in a way: something the designer thought was an "invalid state" turns out to be a valid and possible state in the real world. In terms of UI/UX, it's the uncomfortable long pause before something happens and the screen renders (lack of feedback, the feeling that the system hangs). Or content flicker when a window is resized or dragged. Just because somebody thought "oh, this clearly is an invalid state and can be ignored".
The real world and user experience requirements have a way of intruding on these underspecified models of how the world "should" be.
That’s still a poorly designed system. For UI there should be a ‘view model’ that augments your model; that view model should be able to represent every state your UI can be in, which includes any ‘intermediate’ states. If you don’t do this with a concrete and well-constrained model then you’re still doing it, just with arbitrary UI logic and other ad-hoc state that is much harder to understand and manage.
Ultimately you need to make your own pragmatic decisions about where you think that state should be and how it should be managed. But the ad-hoc approach is more open to inconsistencies and therefore bugs.
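As a small sketch of what that can look like in Python (all names invented): the domain model stays free of UI concerns, while the view model spells out every state the UI can be in, including intermediate ones like the long pause during a save.

```python
from dataclasses import dataclass
from enum import Enum, auto

@dataclass
class Document:
    """The domain model - no UI concerns."""
    title: str
    body: str

class SaveState(Enum):
    """Every UI state is spelled out, not implied by ad-hoc flags."""
    IDLE = auto()
    SAVING = auto()   # the "uncomfortable long pause" is a real state
    SAVED = auto()
    FAILED = auto()

@dataclass
class DocumentViewModel:
    """Augments the model with UI-only state; the UI renders from this."""
    doc: Document
    save_state: SaveState = SaveState.IDLE
    status_line: str = ""

def on_save_clicked(vm: DocumentViewModel) -> None:
    vm.save_state = SaveState.SAVING
    vm.status_line = "Saving…"
```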
Hi, can you give an example? Not sure I understand what you're getting at there.
(My tuppence: "the map is not the territory", "untruths programmers believe about...", "Those drawn with a very fine camel's hair brush", etc etc.
All models are wrong, and that's inevitable/fine, as long as the model can be altered without pain. Focusing on ease of improving the model (e.g. can we do rollbacks?) is more valuable than getting the model "right".)
> Hi, can you give an example? Not sure I understand what you're getting at there.
An utterly trivial example is constraining the day-field in a date structure. If your constraint is at the level of the field then it can’t make a decision as to whether 31 is a good day-value or not, but if the constraint is at the record-structure level then it can use the month-value in its predicate and that allows us to constrain the data correctly.
When it comes to schema design it always helps to think about how to ‘step up’ to see if there’s a way of representing a constraint that seems impossible at ‘smaller’ schema units.
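A sketch of that in Python (simplified; leap-year handling included just to show the year participating too): the day check needs the month, so the constraint lives on the record, not the field.

```python
from dataclasses import dataclass

DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_leap(y: int) -> bool:
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

@dataclass(frozen=True)
class Date:
    year: int
    month: int
    day: int

    def __post_init__(self):
        # A field-level check is fine for the month in isolation...
        if not 1 <= self.month <= 12:
            raise ValueError(f"bad month: {self.month}")
        # ...but the day check needs the month (and year), so it has to
        # live at the record level.
        limit = DAYS_IN_MONTH[self.month - 1]
        if self.month == 2 and is_leap(self.year):
            limit = 29
        if not 1 <= self.day <= limit:
            raise ValueError(f"day {self.day} is invalid for month {self.month}")

Date(2024, 2, 29)    # fine: leap year
# Date(2023, 2, 29)  # raises ValueError
```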
The Amiga 500 had high-res graphics (or high-color graphics … but not on the same scanline), multitasking, and 14-bit sound (with a lot of work - the hardware had 4 channels of 8-bit DACs but only 6-bit volume, so you paired channels to get there …)
In 1985, and with 512K of RAM. It was very usable for work.
For OCS/ECS hardware, 2-bit HiRes - 640x256 or 640x200 depending on region - was the default resolution for the OS, and you could add interlacing or bump the color depth to 3 or 4 bits at the cost of response lag; starting with OS 2.0 the resolution setting was basically limited by chip memory and what your output device could actually display. I got my 1200 to display crisp 1440x550 on my LCD just by sliding the screen parameters to max on the default display driver.
Games used either 320h or 640h resolutions, at 4 bit or a fake 6 bit known as Extra Half-Brite, which was basically 5 bit with the other 32 colors being the same palette at half brightness. The fabled 12-bit HAM mode was also used, even in some games, even for interactive content, but not too often.
Sorry, I'm not exactly sure what you're saying. I know very well how it works, as I still write a lot of demos and games for mode 13h (see https://www.pouet.net/groups.php?which=1217&order=release) and I can program the VGA DAC palette in my sleep. Were you referring to the fact that you write 8 bits to the palette registers? That's true, you do, but only 6 bits are actually used, so it effectively wraps around at 64. There are 6 bits per colour component, which, as you pointed out, is 18 bits of colour depth.
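For anyone who hasn't poked the DAC directly, that wraparound is just the hardware ignoring the top two bits - a trivial Python model of it:

```python
def dac_write(value: int) -> int:
    # The DAC registers keep 6 bits per component; writing an 8-bit
    # value effectively wraps at 64.
    return value & 0x3F

assert dac_write(63) == 63    # brightest
assert dac_write(64) == 0     # wraps
assert dac_write(255) == 63

def rgb8_to_dac(c: int) -> int:
    # Converting a normal 8-bit component to a 6-bit DAC value.
    return c >> 2
```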
Btw I was a teenager when those Denthor trainers came out and I read them all, I loved them! They taught me a lot!
Do note that unlike Python’s “from a import *; from b import *”, where you have no idea where a name came from later in the code (and e.g. changes to a and b, such as new versions, will change where a name comes from), Nim requires a name to be unambiguous, so that if “b” added a function that previously only “a” had, you’d get a compile-time error.
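For anyone who hasn't been bitten by the Python side of this, a minimal repro (module names made up; shown as separate files in one sketch): the second wildcard import silently wins, so a new release of b can change which function you're calling without any error.

```python
# a.py
def process(x):
    return f"a.process({x})"

# b.py - suppose a later version of b adds its own `process`
def process(x):
    return f"b.process({x})"

# main.py
from a import *
from b import *    # silently shadows a.process - no warning, no error

print(process(1))  # b.process(1); Nim would reject this call as ambiguous
```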
The alternative is checking the result of every operation, or using “signaling NaNs” that raise an exception on a (properly configured) scalar operation on a CPU. As soon as non-scalar code is involved - SIMD or GPU - quiet NaNs with strategically placed explicit tests along the computation become the only reasonable/efficient option.
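A sketch of the quiet-NaN approach with NumPy (stage contents invented): let NaNs propagate through the vectorised stages and test explicitly only at the few checkpoints where acting on a failure is cheap.

```python
import numpy as np

def pipeline(x: np.ndarray) -> np.ndarray:
    # Stage 1: may quietly produce NaNs (e.g. log of a non-positive value).
    with np.errstate(invalid="ignore", divide="ignore"):
        y = np.log(x)

    # Strategic checkpoint: one vectorised test for the whole batch,
    # instead of checking the result of every scalar operation.
    if np.isnan(y).any():
        raise ValueError("stage 1 produced NaNs")

    # Stage 2 runs knowing its input is clean.
    return y * 2.0

pipeline(np.array([1.0, 2.0, 3.0]))   # fine
# pipeline(np.array([1.0, -1.0]))     # raises at the checkpoint
```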