Hacker News | hakfoo's comments

That's what sort of confuses me.

If you're no longer running a 680x0 and the custom chips that defined what an (original-series) Amiga was, and you can't plug in any of the historic peripherals (I don't even see a 9-pin joystick/mouse socket!) why bother with PowerPC in $current_year?

I can sort of see the story for projects like the "Denise" board where it's basically a way to create a new hardware 68k Amiga (although modern replacements for the Commodore silicon might be desirable, so we aren't just desoldering/desocketing the same 30-year-old chips again and again).

But if you've already given up the main aspects of classic Amiga hardware and chosen emulation as the road forward, the cheapest commodity x86-64 or ARM products would be fast enough to emulate pretty much any mainstream 680x0 option and the custom chips. I could see a small niche for a PPC coprocessor accelerator for the small sliver of "PPC-native" software if the current emulation isn't fast enough.


I'm not using it for travel, but I got a GL-BE3600 recently and it's surprisingly decent as a home router for my very specific needs.

I wired the desktop PCs in the house, so the only Wi-Fi users are mobiles, a smart TV, and a laptop. Everything else is already hanging off 2.5G wired switches. Pretty light duty, and I just wanted something that would provide robust routing and placeholder Wi-Fi. This does exactly that, and since it's OpenWRT based, it's probably marginally less terrible than whatever TP-Link was offering in the same price range.

It does run annoyingly hot, but I should just buy a little USB desk fan and point it at the router :P


I've had very impressive success running upstream OpenWRT on TP-Link hardware: I have Archer C7 access points running with literally years of uptime.

That being said, for any new application, I suggest using at least an 802.11ax AP, because cheap 2.4GHz devices that support 802.11ax are becoming common and using an 802.11ac router means that your 2.4GHz devices will be stuck with 802.11n, which is quite a bit less efficient. Even if you don't need any appreciable speed, it's preferable to use a more efficient protocol that uses less airtime.
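To put a rough number on the airtime argument, here's a back-of-the-envelope sketch in Python. The PHY rates are illustrative single-stream 20 MHz maximums (not from the comment above), and the model ignores preambles, ACKs, and aggregation, so it only shows the relative saving:

```python
# Rough airtime comparison for a 1500-byte frame on 2.4 GHz.
# Rates are illustrative: ~72 Mbps for 802.11n (HT20, short GI)
# vs ~143 Mbps for a single-stream 802.11ax (HE20) device.
FRAME_BITS = 1500 * 8

def airtime_us(phy_rate_mbps: float) -> float:
    """Microseconds of airtime to send the payload at a given PHY rate."""
    return FRAME_BITS / phy_rate_mbps  # bits / (bits per microsecond)

n_time = airtime_us(72.2)
ax_time = airtime_us(143.4)
print(f"802.11n:  {n_time:.0f} us per frame")
print(f"802.11ax: {ax_time:.0f} us per frame")
print(f"ax uses ~{100 * (1 - ax_time / n_time):.0f}% less airtime")
```

Even in this crude model, the same 2.4GHz device frees up roughly half the airtime when it can negotiate 802.11ax instead of falling back to 802.11n.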


Ditto. TP-Link's stock Archer A7 firmware is a security nightmare [1], but with DD-WRT installed it is very stable and reliable.

[1] Daughter invited ~10 classmates to prepare for a science competition, and one of them had a virus (I assume) that hacked TP-Link's firmware to draft it into a botnet. The WAN connection would drop every hour for a few minutes, plus there was unexplained internet traffic while nobody was using it. Resetting the firmware did not help; installing DD-WRT fixed it once and for all.


I think I actually retired an Archer C7 for this. The goal was something 2.5G-ready, because the city has systematically rolled out fibre to every neighbourhood around here and I'm just waiting for the knock.

Honestly if you're not invested in maybe Ruckus or Aruba, I don't think there's much better than OpenWRT on a decently supported AP. I had a bunch of the C7s with OpenWRT and they've been totally bulletproof. I only upgraded to R650s recently and it's not clear beyond maybe the antenna setup and the fact that it's ax now that it's much better.

I have the same router as the OP article - it ran at 72C until I did [this](https://phasefactor.dev/2024/01/15/glinet-fan.html#choosing-...). Currently running at 60C!

1:76 is a very popular scale for model railways in the UK, so I wonder if they design figures for that audience, then paint them in fatigues to expand the market range.

The US preferred 1:87 historically. English 1:76 and American 1:87 trains run on the same size track, but the English models are typically built slightly over-size because their smaller bodies wouldn't fit a good motor easily.
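The track-sharing works out because full-size standard gauge is 1435 mm and both scales run on 16.5 mm model track. A quick sanity check of the arithmetic:

```python
# Standard gauge is 1435 mm. HO (1:87) and OO (1:76) models both
# run on 16.5 mm track; dividing shows HO track is nearly true to
# scale while OO track is visibly narrow for its scale.
STANDARD_GAUGE_MM = 1435

ho_gauge = STANDARD_GAUGE_MM / 87   # what 16.5 mm track represents in HO
oo_gauge = STANDARD_GAUGE_MM / 76   # what OO track "should" measure

print(f"HO-scale gauge: {ho_gauge:.1f} mm")   # ~16.5 mm, matches the track
print(f"OO-scale gauge: {oo_gauge:.1f} mm")   # ~18.9 mm, wider than the track
```

So an OO body at 1:76 rides on track that is about 2.4 mm too narrow for its scale, the flip side of the extra body volume that made room for a motor.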


To me, the big aesthetic of early Qt/KDE1 is "Obvious Motif ripoff". Aside from the Win95/Warp style titlebars, if you don't have the big thick bevels and the distinct scroll bars, it's not quite right.

It really galls me that they removed the Motif style in Qt6, since I target that as my default look and feel. It gives a nice "This is expensive professional software with a codebase tracing back to the Reagan administration" vibe.

There are themes that come close in various attempts: "Commonality" for Qt6/Kvantum, and some of the assets from NsCDE for GTK. But it feels like a pitched battle against design teams that desperately want to mimic whatever Apple is doing this week.

https://imgur.com/a/MWiFhkH


> To me, the big aesthetic of early Qt/KDE1 is "Obvious Motif ripoff". Aside from the Win95/Warp style titlebars, if you don't have the big thick bevels and the distinct scroll bars, it's not quite right.

This is a KDE1 screenshot: https://diit.cz/sites/default/files/kde1-snapshot01.png

Is this really a Motif ripoff? For me it's much more like Win95 with a better titlebar in the window decoration. To each their own, but it seems quite right to me.


https://en.wikipedia.org/wiki/K_Desktop_Environment_1#/media... Evidently they offered both the Win9x and Motif-like themes back then too, I guess I never bothered with it.

I can recall trying Beta 4 and probably 1.0, but at the time it felt like a weird situation. It wasn't quite everything you needed, and a lot of the apps were still obviously sort of immature. The HTML-driven file manager was interesting (ISTR OS/2 offered a similar way to customize things on a per-directory level), but it seemed like a lot of resources when a 486/80 with an obscene 32MB of memory was my Linux machine.


Of course you thought that was a Motif ripoff. The screenshot you reference is from an alpha pre-release version. KDE1 had neither the widget style nor the window decorations of that screenshot.

Also 20% for a big heavy transformer.

Sometimes I see cheap "amplifier only" designs that are about the size of a small 2U rackmount, but then you usually give up a lot of inputs and controls; they seem to be used either as PA amplifiers or as "extra room" units in the weird whole-house audio systems that apparently thousands of people had at one point and all dumped in the Goodwill.


Why don't they have at least a receive-only radio? I can understand if they're averse to someone keying up and accidentally broadcasting Secret Military Stuff on the civilian frequency, but an air-band capable VHF receiver is less than $100 as a consumer buying single units. Surely the MIC could find a way to add one for just $10k as cheap insurance against losing a $5 million plane in a tragic and avoidable accident?

For some reason I had a much easier time getting OpenBSD working on one specific laptop (a Thinkpad E585 where I had replaced the stock Wifi with an Intel card). A lot of Linux distributions got into weird states where they forgot where the SSD was, and there was chicken-and-egg about Wifi firmware.

OpenBSD at least booted far enough that I could shim the Wifi firmware in as needed. I probably picked the wrong Linux distribution to work with, since I've had okay luck with Debian and then Devuan on that machine's replacement (an L13).


Probably because OpenBSD developers use laptops, so they port the OS to laptops all the time.

FreeBSD has a few laptop developers, but most are doing server work. There is a project currently underway to help get more laptops back into support again: https://github.com/FreeBSDFoundation/proj-laptop


For my two cents, it discourages standardization.

If you run bare-metal, and instructions to build a project say "you need to install libfoo-dev, libbar-dev, libbaz-dev", you're still sourcing it from your known supply chain, with its known lifecycles and processes. If there's a CVE in libbaz, you'll likely get the patch and news from the same mailing lists you got your kernel and Apache updates from.

Conversely, if you pull in a ready-made Docker container, it might be running an entire Alpine or Ubuntu distribution atop your preferred Debian or FreeBSD. Any process you had to keep those packages up to date and monitor vulnerabilities now has to be extended to cover additional distributions.
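To make the "additional distributions" point concrete, here's a toy sketch (the image names and distro IDs are invented, standing in for what you'd read out of each image's /etc/os-release):

```python
# Illustrative only: a Debian host running a few third-party containers,
# each shipping its own base distro inside the image.
host = "debian"
container_images = {
    "webapp": "alpine",
    "db": "ubuntu",
    "cache": "alpine",
}

# Every distinct distro is another security feed and update
# process you now have to follow.
distros_to_track = {host} | set(container_images.values())
print(f"Tracking updates for: {sorted(distros_to_track)}")
```

One host plus three containers and you're already subscribed to three distributions' worth of advisories instead of one.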


You said it better at first: Standardization.

Posix is the standard.

Docker is a tool on top of that layer. Absolutely nothing wrong with it!

But you need to document towards the lower layers. What libraries are used and how they're interconnected.

Posix gives you that common ground.

I will never ask for people not to supply Docker files. But to me it feels the same as if a project just released an apt package and nothing else.

The manual steps need to be documented. Not for regular users but for those porting to other systems.

I do not like black boxes.


The reason I moved away from Docker for self-hosted stuff was the lack of documentation and very complicated Dockerfiles with assorted shell scripts and service configs. Sometimes it feels like reading autoconf-generated files. I much prefer to learn whatever packaging method the OS uses and build the thing myself.


Something like Harbor easily integrates to serve as both a pull-through cache and a CVE scanner. You can even restrict pulls by vulnerability type or CVSS rating.

You /should/ be scanning your containers just like you /should/ be scanning the rest of your platform surface.


I always saw it as two different mindsets for data storage.

One vision is "medium-centric". You might want paths to always be consistently relative to a specific floppy disc regardless of what drive it's in, or a specific Seagate Barracuda no matter which SATA socket it was wired to.

Conversely it might make more sense to think about things in a "slot-centric" manner. The left hand floppy is drive A no matter what's in it. The third SATA socket is /dev/sdc regardless of how many drives you connected and in what order.

Either works as long as it's consistent. Every so often my secondary SSD swaps between /dev/nvme0 and /dev/nvme1 and it's annoying.


> One vision is "medium-centric". You might want paths to always be consistently relative to a specific floppy disc regardless of what drive it's in, or a specific Seagate Barracuda no matter which SATA socket it was wired to.

> Conversely it might make more sense to think about things in a "slot-centric" manner. The left hand floppy is drive A no matter what's in it. The third SATA socket is /dev/sdc regardless of how many drives you connected and in what order.

A third way, which I believe is what most users actually want, is a "controller-centric" view, with the caveat that most "removable media" we have nowadays has its own built-in controller. The left hand floppy is drive A no matter what's in it, the top CD-ROM drive is drive D no matter what's in it, but the removable Seagate Expansion USB drive containing all your porn is drive X no matter which USB port you plugged it in, because the controller resides together with the media in the same portable plastic enclosure. That's also the case for SCSI, SATA, or even old-school IDE HDDs; you'd have to go back to pre-IDE drives to find one where the controller is separate from the media. With tape, CD/DVD/BD, and floppy, the controller is always separate from the media.


AmigaOS supported both. Each drive and, in addition, each medium had its own name. If GAMEDISK was in floppy 0, you could reference it either as DF0: or as GAMEDISK:

You could even reference media that was not loaded at the time (e.g. GAMEDISK2:) and the OS would ask you to insert it into any drive. And there were "virtual" devices (assigns) that could point to a specific directory on a specific device, like LIBRARIES:


And the sad thing is that stuff directly in `/dev` is neither; it's just "first come, first served" order, which is more or less guaranteed to be non-deterministic BS. One is supposed to use the udev /dev/disk/by-path/ subtree if one really wants "slot-centric" connections.
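For the "medium-centric" view, a udev rule can also pin a stable name to a specific drive regardless of enumeration order. A sketch (the serial number and symlink name here are made up):

```
# /etc/udev/rules.d/99-disk-names.rules -- example only
# Create /dev/ssd-secondary pointing at whichever node this
# particular drive was enumerated as, matched by its serial.
SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="S123ABC456", SYMLINK+="ssd-secondary"
```

After a `udevadm control --reload` and re-plug, the drive answers to the same symlink no matter whether the kernel called it nvme0 or nvme1 that boot.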


Wayland has the same odor a failed state has.

It's a huge hairball with no easy fixes, but at the same time, that's of significant benefit to some specific players. You can have a very usable X11 desktop with positively pre-Cambrian software. But to keep up with Wayland's ever-evolving omnishambles, you basically have to run KDE or GNOME, or maybe Sway.


That is not what the protocol is like at all. Wayland can't do certain things not because they are difficult to do, but because Wayland was initially pathologically minimalist (I say that as somebody who generally likes minimalism) and the approval process for extensions has only improved slowly over time. It turns out that there's much more than blitting rectangles to a (single) screen and routing input events to a desktop environment.

If all the expected stuff was there, Sway (chosen because it's not tied to a desktop environment) could be your X server equivalent.

