
That's also the most tiresome part of driving and has the least risk due to low speeds. Easy win for FSD. But for all other cases it becomes a complicated ethical question.


That's how I always used it. CMake and non-Qt. Very solid IDE.


One only needs to look at the massive island complexes with underground bunkers the billionaires are building for themselves.


Finally there's a new stepping (i.e. revision) of the RP2350 die that fixes the E9 bug that has plagued the chip so far, causing excessive leakage current on GPIO inputs, which made the internal pull-down resistors unreliable and true high-impedance inputs impossible.

GPIO pins are now all 5V tolerant too!
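
For context, here's roughly the kind of plain input setup that E9 broke on earlier steppings. This is only a minimal sketch using the Pico SDK's hardware_gpio calls, and the pin number is an arbitrary example:

    // Minimal RP2350 input read with the Pico SDK.
    // On pre-fix steppings hit by erratum E9, pad leakage could pull an
    // input up toward ~2.2 V and overwhelm the internal pull-down, so the
    // documented workaround was an external pull-down (roughly 8.2 kOhm or
    // lower) or disabling the pad's input buffer when not reading.
    #include <stdio.h>
    #include "pico/stdlib.h"
    #include "hardware/gpio.h"

    #define BUTTON_PIN 15  // arbitrary example pin

    int main(void) {
        stdio_init_all();

        gpio_init(BUTTON_PIN);
        gpio_set_dir(BUTTON_PIN, GPIO_IN);
        gpio_pull_down(BUTTON_PIN);  // enough on its own with the fixed stepping

        while (1) {
            printf("button: %d\n", gpio_get(BUTTON_PIN));
            sleep_ms(100);
        }
    }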


Instead of a moonshot microkernel, why didn't Google just build and maintain a new Linux driver API/ABI layer with backwards compatibility and security? Not an easy endeavor, but is it harder than Fuchsia?


Google kind of does this with Android. Most of the magic sauce for a lot of hardware is in user space -

https://siliconsignals.io/blog/implementing-custom-hardware-...

That's why you used to not touch the vendor partition when flashing a custom ROM, etc.


It is more of a moonshot to design a stable API while the Linux devs are constantly pulling the rug out from under you.

Microkernels provide nice, secure API boundaries, and modern CPUs have optimizations that reduce the performance impact of crossing them.

A monolithic design forces you to stay in either user or kernel mode for as long as possible so as not to lose performance. Add in the API and ABI instability, and such a compatibility layer becomes near impossible to maintain.

It would require a hard fork of Linux, which wouldn't be Linux anymore. The monolithic design is an artifact of the low-register-count CPUs of the past. If you are going to hard-fork a kernel anyway, why not use a more modern design?
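
To make the maintenance problem concrete: the compatibility layer proposed upthread usually ends up as a frozen, versioned table of entry points that out-of-tree drivers compile against, with a shim translating to whatever the kernel internals currently look like. A purely hypothetical C sketch, with none of these names being real Linux (or Fuchsia) interfaces:

    /* Hypothetical "stable driver ABI" shim: drivers target this frozen,
     * versioned vtable; the shim maps it onto the current in-kernel APIs. */
    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    #define STABLE_DRV_ABI_VERSION 1u

    struct stable_net_driver_ops {
        uint32_t abi_version;   /* must match STABLE_DRV_ABI_VERSION */
        int  (*probe)(void *dev);
        void (*remove)(void *dev);
        int  (*transmit)(void *dev, const void *frame, size_t len);
    };

    /* Reject drivers built against another ABI revision; this is exactly
     * the contract mainline refuses to freeze for in-kernel interfaces. */
    static int stable_drv_register(const struct stable_net_driver_ops *ops) {
        if (ops->abi_version != STABLE_DRV_ABI_VERSION)
            return -1;
        /* ...translate ops to whatever the kernel currently expects... */
        return 0;
    }

    static int  dummy_probe(void *dev) { (void)dev; return 0; }
    static void dummy_remove(void *dev) { (void)dev; }
    static int  dummy_tx(void *dev, const void *f, size_t n) {
        (void)dev; (void)f; return (int)n;
    }

    int main(void) {
        struct stable_net_driver_ops ops = {
            .abi_version = STABLE_DRV_ABI_VERSION,
            .probe = dummy_probe, .remove = dummy_remove, .transmit = dummy_tx,
        };
        printf("register: %d\n", stable_drv_register(&ops));
        return 0;
    }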


You have to wonder why the Linux devs are "pulling the rug out from under you".


If you get your driver into the mainline kernel, it gets maintained. If your driver is not in mainline, then you have to deal with kernel internals changing yourself.

Hell, a while back one of the kernel devs was actively screaming for hardware manufacturers to reach out to them so folks could work with the manufacturer to get drivers for their products into mainline. There was even a website and nice instructions on what to do and who to contact... but I'll be fucked if I can find it anymore.

There's nothing nefarious going on... it's explicitly stated (and well-known) that the stable interface to Linux is the userspace interface. Inside the kernel, things are subject to change at any time. Don't want to have to work to keep up? Get your driver into mainline!


They want to keep control. Same reason RMS opposed exposing GCC internals for non-free use.


Yes. If you want a nice, secure driver model, a greenfield project is much easier.

Maybe one could run a Fuchsia-like thing inside Linux and use Linux to provide the Linux userland ABI, but that might be challenging to maintain.



Because with Fuchsia they get the bonus of no GPL and owning the copyright.


I made my DEs somewhat pretty but never got too deep into ricing. Then I switched to i3-gaps. But I still spent a lot of time configuring and comparing terminal emulators, shell prompts, completion, vim plugins, iommu/vfio...

At one point I got sick of it all and decided to switch back to Windows/MacOS, not configure anything too heavily, install tools when I need them, and use all tools as vanilla as possible. Besides not having to maintain all this stuff I also realized I could then ssh into any machine and not miss my customized tools.


All the "AI software engineer" hype is about one thing: suppressing developer wages. Steve Jobs used to do this covertly, to some extent, with his "no cold call" anti-poaching agreements.

It's a multi-prong attack. One angle is the interest rates and mass layoffs. Another angle is AI. There are more ridiculous ones, such as this "study" that claims 10% of devs do literally no work: https://x.com/yegordb/status/1859290734257635439 - I expect many such studies to be conducted and generously funded by the industry in the coming years.

The big irony is how all this talk will lead to a decrease in the software engineer supply. Why would anyone choose a career that's about to die?


For any consumer use to play back audio, I can't imagine a scenario where a TL071 wouldn't be enough. And you'd rarely even need an op-amp when integrated solutions are available.


Potential replacements for ultra-low THD would be the OPA1612, OPA1656 and OPA1642, with bipolar, CMOS and JFET inputs respectively.


There's a massive (old) list of them in Horowitz & Hill TAOE 3e, Table 8.3a, p. 522. I'm sure you can just go to Digikey or Octopart and do a parametric search for high-voltage, low-noise, BJT-input op-amps too. If one wanted to "use a Ferrari to go to the grocery store", one could always use a $20 AD797 or LT1115 for audio applications. :o)


I picked those particular chips due to their extremely good performance at a reasonable cost. Check their THD against the AD797 and LT1115! :)


Fun fact: a rectangular sheet of resistive material with a given thickness and terminals along two opposite edges has a resistance measured in a curious unit: ohms per square. Not square meters, just square!
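
A quick derivation of why the area unit cancels, writing \rho for resistivity, t for thickness, L for the length along the current path and W for the width:

    R = \rho \frac{L}{W t} = \frac{\rho}{t} \cdot \frac{L}{W} = R_s \cdot \frac{L}{W}

For a square patch L = W, so every square, large or small, has the same resistance R_s = \rho / t; the "square" carries no dimensions.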

