One day I will give a lightning talk about the load-bearing teapot, or how and why I made HTTP status 418 a load-bearing part of an internal API, and why it was the least bad option considering the constraints.
Google’s spiders will punish you for giving them too many 429 responses. It’s hell for hosting sites with vanity URLs. They can’t tell they’re sending you 50+ req/s.
It’s practically a protection racket. Only AWS gets the money.
> I've had to do some ridiculous things to get them to behave after installing Linux, like tricking the BIOS to deal with UEFI correctly
I would suggest going for a couple of generations newer - the M92p is from an era before UEFI became really stable. For automated testing of my startup's product we have a test lab of tens of older USFF desktops, and the M700/M900/M910 machines are some of my favorites. They're also just before the cut-off for Windows 11 support, so they're still available dirt cheap.
Two things to watch out for: first, the M700 lacks PCIe on its M.2 slot - the internal M.2 slot supports only SATA drives. Second, the front USB ports are a really common point of failure.
Ooo that's _gotta_ be what it is. Just the most bizarre UEFI issues. I luckily found an incantation that works in a pretty general way for M92ps, but had I not, I'd have some bricks lying around.
I have some M910q units that I am very happy with. UEFI is well supported, and I was able to upgrade them to 32 GB of RAM, an i7-7700T, and both a 512 GB SSD and an NVMe drive for mirrored storage. Highly recommended. Sure, it would be nice to get something newer than 7th gen, but it's still highly capable, small, quiet, and has fairly low power usage.
I hope we get a Mac client for this at some point - travelling with two laptops sucks, so bringing a NUC I could remote into with good performance would be a huge win.
Yes - but as this is a purpose-designed protocol for display transport over Thunderbolt, I would expect it to perform better than a remote desktop solution intended to go over a potentially low-bandwidth network.
In the past I've found that using RDP to a VM running on localhost can actually perform better than the console provided by the VMM, but it's still not close to the experience of using the OS natively. I would expect this to be a lot closer.
Is it though? TB has native support for DisplayPort streams, but the article doesn't reference DisplayPort at all, and instead describes it like this:
> A decade ago, Intel showed off something very similar: a 10Gbit Ethernet-over-Thunderbolt demonstration called Thunderbolt Networking. This is a faster version, an Intel representative said via email. Thunderbolt Share uses up to a 20Gbps connection over Thunderbolt 4 with low latency
> Essentially, you’re performing a local, cabled version of Microsoft’s Remote Desktop without all of the setup.
I don't think there's any reason yet to believe that this isn't just a proprietary (or possibly even just embedded VNC) screen-sharing solution that's been tweaked for a high-bandwidth, low-latency connection.
macOS has a "high performance mode" for its regular screen sharing (as of v14, Sonoma) which works amazingly well even over wired Gigabit ethernet.
In this, it's claimed that Intel is doing a direct framebuffer copy. I'd say the "Microsoft’s Remote Desktop without all of the setup" line is editorialising.
It's not the clearest shot, but the latency shown at 30s in that video looks pretty good to my eyes.
On the other hand I've been caught out by tech companies making exaggerated claims about pre-release products before, so who knows, maybe it actually is no better than VNC.
A hill I will die on is that tech products should just stop bundling cables, for anything, with the possible exception of unit-specific power adapters. A while back I purchased a KVM switch - it came with 3 DP cables, which went straight into my e-waste box. I've also seen office fit-outs where mountains of cables that came with monitors went straight from factory to landfill because they were the wrong length.
I understand some of the reason it happens - it's not a great experience to buy a product and then be unable to start using it immediately because you don't have the right cables. And there are a lot of low-quality cables out there which might have the right connectors but not actually work - I bought at least 3 different 5m DP cables before I found one that reliably worked at 4K. But surely that can't justify the literal mountains of e-waste the practice creates.
Sadly I don't think it'll ever change without regulation.
> possible exception of unit-specific power adapters
No. Unit-specific power adapters should not exist. Either put a USB-C or a 120 V/240 V AC connector on the device, depending on power requirements. It's really not that hard.
Note: it must be a connector, not a fixed cable - i.e. an IEC C8 or C14.
I agree in principle, but I think there has to be some room for exceptions here. Some portable devices like smartwatches are too space constrained for USB-C and some devices might use too much power for USB-PD but still be too small to include the power supply internally. Also, some of my synth gear uses a locking barrel connector, which I think is a better trade-off than a locking USB-C connector because it can be locked and unlocked faster.
Bundled power bricks are also much less likely to directly be e-wasted without being used.
This might apply to (most) IT devices. But there are devices that require 24 V, or 48 V, or any other voltage that USB-C can't supply, and that for various reasons (space, EMI, possibly even compliance with some safety regulation) can't contain an integrated power supply unit from 120/230 V mains. Of course this should be an exception and most consumer devices can definitely work with the regular voltages and currents that USB-C can supply.
It's a bit silly IMO, but USB PD EPR _can_ support 24 V and 48 V - for charging laptops, I believe. The day I see a server rack with a pair of USB-C cables plugged into it is still a long way off, I hope.
Didn't know, thanks for the info... I have yet to see a power supply capable of delivering those voltages over USB-C though. All the ones I've seen can output 5 V, 12 V and 19 V.
> But there are devices that require 24 V, or 48 V, or any other voltage that USB-C can't supply, and that for various reasons (space, EMI, possibly even compliance with some safety regulation) can't contain an integrated power supply unit from 120/230 V mains.
Totally agree. Using something like USB-PD that has voltage negotiation on the wire might increase the price of some appliances (especially those that have just minimal electronics inside), but a standard connector (like just a barrel jack?) would already be a nice thing!
A really bad one these days is devices shipping with crappy power-only micro-USB cables - very often these still have the 4-pin head, so they look just like data cables. I've started instantly binning them to avoid situations where I need to transfer data and can only find these lying around.
Yes, and I think they did that for relatively cynical reasons - as a handout to the Best Buys of the world, who would then be able to attach a 90% margin "Printer cable" at $29.95 to your $50 Black Friday special inkjet.
I suspect the reason this didn't go on to become standard across all classes of device is that, since 2010 or so, the average or median margin on accessories has cratered thanks to Amazon Marketplace sellers. You could realistically end up needing to buy a $30 Belkin printer cable in 2005, unless you'd heard of Monoprice. Today, by contrast, if you just search Amazon for it, you'll have one for $4.94-$6.49 delivered within 2 days. If margins were still what they used to be on cables and stuff, I think you'd have a strong incentive for places like Amazon and Walmart to pressure suppliers to make cables a la carte (officially for environmental reasons, but also, for great profits, lol).
You won't die alone on that hill. I think it's a great thing that many phones no longer ship with chargers. The mild inconvenience of having to buy a separate charger should not outweigh the reduction in the amount of waste we produce with new chargers.
Brazil has made it illegal to sell a phone without a charger, which IMO is a total step backwards. If anything, it should be illegal not to give the option to unbundle cables from the package.
Embedding the Starlark interpreter into a Rust program took me less than an hour. There's little more to it than adding the crate and calling into it. No futzing with the build process.
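For reference, the whole embedding is roughly this (a minimal sketch from memory - the script and file name are made up, and exact signatures vary a little between versions of the starlark crate):

    use starlark::environment::{Globals, Module};
    use starlark::eval::Evaluator;
    use starlark::syntax::{AstModule, Dialect};

    fn main() {
        // A tiny script supplied by the host application.
        let script = "def double(x):\n    return x * 2\n\ndouble(21)";

        // Parse, then evaluate against the standard globals.
        // Error handling is elided with unwrap() for brevity.
        let ast = AstModule::parse("demo.star", script.to_owned(), &Dialect::Standard).unwrap();
        let globals = Globals::standard();
        let module = Module::new();
        let mut eval = Evaluator::new(&module);
        let result = eval.eval_module(ast, &globals).unwrap();

        println!("{}", result); // prints 42
    }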
If starlark does everything you need (and especially if its limitations are desirable for your use case) then it's the clear choice in my view.
Measured by LOC, a lot of code in systems I've worked on is just copying data from one type of object to another. One frustrating bug I've dealt with was due to someone copying the wrong value between two similarly-named fields, but the request went through so many layers of the system before it was processed by the buggy code that it took hours to track down.
I've spent a lot of time thinking about how to write less of this code, and I think what I want is something similar to Postgrest, but with a mechanism for some sort of hook, where I can write some code to manipulate the request/data in a type-safe way.
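To make that concrete, the shape I have in mind is something like this - purely hypothetical, all the type and trait names are invented:

    // Imagine the framework generates a typed struct per table/endpoint,
    // and I only supply the transformation logic.
    struct NewOrder {
        customer_id: i64,
        shipping_address: String,
        billing_address: String,
    }

    // Hypothetical hook trait: runs before the row is handed to the database.
    trait InsertHook<T> {
        fn before_insert(&self, row: T) -> Result<T, String>;
    }

    struct DefaultBillingAddress;

    impl InsertHook<NewOrder> for DefaultBillingAddress {
        fn before_insert(&self, mut row: NewOrder) -> Result<NewOrder, String> {
            // The compiler would catch copying into the wrong similarly-named
            // field on some other type - exactly the class of bug above.
            if row.billing_address.is_empty() {
                row.billing_address = row.shipping_address.clone();
            }
            Ok(row)
        }
    }

    fn main() {
        let hook = DefaultBillingAddress;
        let order = NewOrder {
            customer_id: 42,
            shipping_address: "123 Example St".into(),
            billing_address: String::new(),
        };
        let order = hook.before_insert(order).unwrap();
        assert_eq!(order.billing_address, "123 Example St");
        println!("order for customer {} ok", order.customer_id);
    }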
The closest I've seen to this was early in my career - one of my first jobs was working at a WebObjects consultancy. Because WebObjects provided the full stack from HTML templating engine to ORM - and by that time also community-driven frontend libraries - you had to write very little of this type of code.
I suspect also that some of the resistance to the Postgrest-style approach in enterprise environments comes from tighter controls around data, and requirements/expectations for stricter change control around databases. Buggy code can always be rolled back, but a botched database change could be a much bigger problem. (Of course, the fact that buggy code could corrupt or delete data almost as easily is ignored in this calculation). I still remember weeks of meetings at one employer trying to get a column added with the ultimate answer being 'no'.
Indeed, Google and Amazon have already shown an interest in designing their own silicon (TPU and Graviton). Probably Microsoft too, but I'm not aware of an example off the top of my head.
Upstarts are not easily going to be able to pursue that, so NVidia has a strong interest in supporting them.
Some tools - even cross-platform ones - will create CRLF files on Windows by default (looking at you, JetBrains), so when working on Windows I usually like to have core.autocrlf set to 'input' to avoid accidentally committing CRLF files into the repository.
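Concretely, that's just:

    git config --global core.autocrlf input

(Or, per repository, a .gitattributes with "* text=auto", so normalization doesn't depend on each contributor's local settings.)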
Respecting platform conventions is the sensible default, particularly for a platform that represents over 80% of desktop marketshare.
This problem used to be a whole lot worse under pre-OS X versions of macOS that used CR (just CR, no LF) as the line separator. At least now there are only 2 commonly used conventions and you can essentially just ignore any CRs you encounter in most cases.
> This problem used to be a whole lot worse under pre-OS X versions of macOS that used CR (just CR, no LF) as the line separator.
That bare-CR newline convention used to be very widespread - not just Classic MacOS but also many 8-bit micros (Commodore, Acorn, Apple II, TRS-80, ZX Spectrum, HP-85), Oberon, MIT Lisp Machines and Microware OS-9. But by the late 1990s, Classic MacOS was the only one of those systems with any mainstream significance. And now you'll only encounter bare CR newlines in retrocomputing or obscure legacy systems - oh, and raw-mode terminal input.
> Respecting platform conventions is the sensible default
That's fair, but I'd argue an equally sensible default would be to respect the conventions of the language ecosystems you're working within, and for Java, Python and Rust that's LF. IntelliJ provides a lot of configuration on a per-language basis, but the line separator for new files is a global setting for some reason.
I have a few toy programming languages I created as a hobby / learning exercise. (Most of them I’ve never released, maybe some day.)
In one of them, I decided to make carriage returns a lexical error. In fact, the only C0 control it allows in source text is LF; it doesn't even allow tabs (my personal answer to the perennial spaces-vs-tabs debate).
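The check itself is tiny - a sketch of the idea (not the actual lexer, names invented):

    /// Reject every C0 control character in the source text except LF.
    /// CR (and therefore CRLF) and tabs are lexical errors.
    fn reject_c0_controls(src: &str) -> Result<(), String> {
        for (offset, ch) in src.char_indices() {
            if ch < '\u{20}' && ch != '\n' {
                return Err(format!(
                    "lexical error: C0 control U+{:04X} at byte offset {}",
                    ch as u32, offset
                ));
            }
        }
        Ok(())
    }

    fn main() {
        assert!(reject_c0_controls("let x = 1\n").is_ok());
        assert!(reject_c0_controls("let x = 1\r\n").is_err()); // CR rejected
        assert!(reject_c0_controls("\tlet x = 1\n").is_err()); // tab rejected
    }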
Huge Alpine fanboy myself. I love that a clean install has fewer than ten processes, and I know what they all do. The community is also very good at building packages that "just work" - the article points out ZFS, but docker, podman and libvirt are also trivially installable.
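For example (from memory, so double-check the package names; the community repository needs to be enabled in /etc/apk/repositories first):

    apk add docker podman libvirt zfs zfs-lts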
> Perhaps the package I was most surprised about was zfs. ... What that would look like after an upgrade I’d have to see, but thus far I’m impressed.
I can confirm it works smoothly. I've observed that the ZFS package is updated whenever the kernel is updated.