Hacker News | JeremyBarbosa's comments

SEEKING WORK | Boston, MA USA | REMOTE | Technical Copywriter

Have a new product launching soon? Or some content you have been meaning to get to?

Hi, I'm Jeremy, an ex-software engineer who writes about tech. After helping build a cloud storage unicorn, I got into teaching developers and found out that I'm pretty good at making complicated stuff make sense. Now I write technical content mainly for SaaS companies. Happy to also chat about open-source, cloud architecture, or where tech content is heading.

If you want to learn more, reach out to me at mail@jeremybarbosa.com

--

My Main Services

- Blog Writing: I turn complex tech topics into articles people actually want to read. Whether it's deep dives for your engineering audience or guides that won't scare off beginners.

- Topic Planning: Not sure what to write about? I look deep into what your users care about and map out content that'll actually help them while helping you grow.

- SEO That Makes Sense: No keyword stuffing here - just strategic writing that helps you show up when people Google their problems.

--

Read more on my website: https://jeremybarbosa.com


As someone who has used Bevy in the past, that was my reading as well. It is an incredible tool, but some of the things mentioned in the article, like the gnarly function signatures and constant migrations, are known issues that stop a lot of people from using it. That's not even to mention the strict ECS requirement if your game doesn't map well onto it. Here is a good Reddit thread I remember reading about some more difficulties other people had with Bevy:

https://old.reddit.com/r/rust_gamedev/comments/13wteyb/is_be...

I wonder how something simpler in the Rust world like macroquad[0] would have worked out for them (superpowers from Unity's maturity aside).

[0] https://macroquad.rs/


I was a bit confused about why this was notable, but the Pixel 9a was just released on Thursday. So this is an incredibly fast turnaround for a community OS.


I think a lot of the work is in new device bringup, and given that they can start from official Pixel trees, it shouldn't be too much work to adapt them for whatever GrapheneOS-specific build processes they have, and then I'm assuming the rest of the GrapheneOS customizations are framework side which should be device agnostic. I guess they could have kernel changes for hardening, but not sure how easy or hard porting those would be - is the Pixel 9 series on a newer kernel version than say the Pixel 8?


GrapheneOS has various features requiring hardware integration. Our hardening features also uncover bugs across the code especially in drivers, especially the hardware memory tagging integration in our hardened_malloc project and the standard Linux kernel hardware memory tagging support we use in the kernel. Pixels are very similar to each other though so this work is largely but not entirely done for them.

Adding new Pixels including whole new generations tends to be quite easy. When we still supported Snapdragon Pixels in the official GrapheneOS branch, it would have been fairly easy to add support for a theoretical Sony or Motorola device meeting our security requirements (https://grapheneos.org/faq#future-devices). Now that we don't have a Snapdragon device, we'd need to deal with fixing or working around all the bugs uncovered in Snapdragon drivers, integrating support for the USB-C controller into our USB-C port control feature (https://grapheneos.org/features#usb-c-port-and-pogo-pins-con...), adding back our code for coercing Qualcomm XTRA into being privacy respecting, etc. Snapdragon doesn't have memory tagging yet like Tensor (and recent flagship Exynos/MediaTek now), but pretending it did, we'd need to solve a lot of issues uncovered by it unless Qualcomm and the device OEM heavily tested with it.

See https://news.ycombinator.com/item?id=43669913 for more info including about kernels. 6th, 7th, 8th and 9th generation Pixels share the same Linux 6.1 source tree and kernel drivers since Android 15 QPR2 in March 2025.

Pixel 9a is still using a branch of Android 15 QPR1 due to how device launches work so most of the work involved taking our last Android 15 QPR1 release from early March and rebasing it onto the Pixel 9a device branch tag where they forked off from an earlier Android 15 QPR1 release and backported current security patches to it. We then had to backport our changes since early March. The device branch will go away in a couple months and it will be on the same branch as everything else until it's end-of-life as usual. We could spend more time to integrate it into our main Android 15 QPR2 based branch ourselves but we can also simply wait a couple months. As an earlier example, the Pixel 8a was released in May 2024 based on Android 14 QPR1 rather than the current Android 14 QPR2. It was incorporated into mainline Android 14 QPR3 in June only a few weeks later. We know Android 16 is already going to deal with this so spending our time on this instead of implementing new privacy, security, usability and compatibility improvements would be a waste.


Off-topic, but as someone who's followed your work for a long time, it's cool to see you posting again! I hope you're doing better these days.


Thank you for your work and the interesting and insightful comments. I've learned a lot from you.

I wish it was feasible to run GrapheneOS on more devices. For some reason, Google seems to be incapable of selling Pixels worldwide.


Thank you for all the hard work you do on GrapheneOS.


Thank you for all the work that is done on GrapheneOS.

I got it on a P7P and love it.


Am I supposed to trust security information coming from a user named strcat? /s

Quick, someone named strcat_s or strncat correct them!


Incredibly fast.

While the 9a and 9 Pro are very similar, for community-based development this is substantial.

I am often very critical, but I must give props to the GrapheneOS team.


They are starting from an OS that is made to work on these devices.


Reminds me of iOS and iDevices: even after months of stabilising they fail to stabilise, and then they release the next version after a year with all those bugs intact and accumulated. And in that case the OS is literally made for only those devices and vice versa, under the iron-fisted control of one control-freak corporation with a net worth greater than GDPs many countries would nuke for.

Let's contrast that with a pure community-led effort, motivated by freedom, privacy, and safety, that controls neither the OS nor the devices. It's nothing like what I tried to saltily describe above.

Respectfully, I hope that sunk in.


All Android devices support running the Android Open Source Project via Treble and we could quickly add support for non-Pixel devices doing things in a reasonable way too. Those devices don't meet our hardware security requirements (https://grapheneos.org/faq#future-devices) which is why we don't support them. It wouldn't be that hard to add a Sony or Motorola device but they're missing the expected security features and proper updates. It wouldn't be possible to provide our standard security protections on them which is the real blocking issue, not difficulty. Android has made device support simple, but the rest of the Android device ecosystem is not at all competitive in security with Pixels and iPhones.

We automate a huge portion of the work via https://github.com/GrapheneOS/adevtool. We do a GrapheneOS build with it and it outputs state you can see in https://github.com/GrapheneOS/vendor_state which is then used to automatically figure out all of the missing overlays, SELinux policy, firmware files, etc. When we have our own devices in partnership with an OEM we won't need to use adevtool. We can borrow a lot from Pixels for those devices though.

Pixels are very similar to each other, which does make things simpler. The entire kernel source tree is identical for 6th, 7th, 8th and 9th generation Pixels. They all use the Linux 6.1 long term support branch on bare metal and the Linux 6.6 branch for on-device virtual machines. They'll likely advance to new Linux kernel branches together rather than ending up very split across different ones as they were in the past. That makes things easier for us.

Pixels also share most of the same drivers for the SoC and lots of other drivers. Those drivers support the different generations of hardware with the same codebase for the most part. There are still 6 different Wi-Fi/Bluetooth drivers across them but 5 of those are variations of a very similar Broadcom Wi-Fi/Bluetooth driver and only 1 is a separate Qualcomm Atheros driver (Pixel 7a).

We have various hardware-based hardening features such as our hardware-based disabling of the USB-C port with variations across different hardware generations (https://grapheneos.org/features#usb-c-port-and-pogo-pins-con...) and similar features. Our exploit protection features also uncover lots of memory corruption bugs across devices in their drivers. We do have a lot of device-specific work fixing uncovered bugs. Hardware memory tagging in particular finds nearly every heap memory corruption bug occurring during regular use including out-of-bound reads so that finds a lot of bugs we need to handle. Many of the bugs we find with hardware memory tagging and other memory corruption exploit protections are in drivers or the portable Bluetooth software stack which is thankfully one of the components Android is currently gradually rewriting in Rust along with the media stack.

If we supported a device with much different drivers, there wouldn't be much work to deal with that directly but enabling our features like our hardware memory tagging support would require fixing a bunch of memory corruption bugs occurring during regular use across it. Supporting other Android devices with the Android Open Source Project is easy. Supporting them with GrapheneOS is significantly harder due to various hardening features needing integration at a low level along with others uncovering a lot of latent bugs which were occurring but not being noticed most of the time. The ones which get noticed often due to breaking things get fixed, but many latent memory corruption bugs remain there unless the OEM heavily tests with HWASan or MTE themselves, which is unlikely. Pixels are tested with HWASan and MTE by Google but yet we still have to fix a lot ourselves largely because testing them in a device farm is different than users actually using them with assorted Bluetooth accessories, etc.


Thank you for all the insights.

Nice to know that all supported Pixel phones are not only on the same kernel version, but are actually built from the same source tree now.

Do you also contribute your fixes back to the upstream projects like the upstream Linux kernel, AOSP or Google?

Many of the security features you are using are already included in AOSP, so why does Google not activate them by default? Do they have a different balancing of performance, stability and compatibility on the one side and security on the other? I understand that Google has a different view on privacy for business reasons.


> Do you also contribute your fixes back to the upstream projects like the upstream Linux kernel, AOSP or Google?

We've made significant contributions to the Linux kernel, AOSP and Pixels in the past. We continue doing it to the extent that it helps GrapheneOS users. We no longer spend our time doing work for them if it doesn't have a clear benefit to our users.

Android's security team previously got us security partner access and was in the process of getting us OEM partner access. Android's partner management team blocked us from getting OEM partner access and revoked our security partner access. Due to this, we've reduced our reports of vulnerabilities upstream and have fixed numerous vulnerabilities in GrapheneOS without reporting them. We still report all firmware and hardware vulnerabilities but we make a decision about reporting software vulnerabilities solely based on what's best for GrapheneOS users.

> Many of the security features you are using are already included in AOSP

That's not really the case. The vast majority of our privacy and security features were developed for GrapheneOS. Features being built on top of standard functionality doesn't mean that they're present in AOSP.

For example, GrapheneOS has our own integration of hardware memory tagging into our hardened_malloc project and we turned the overall feature into something which can be used in production without making the OS unusable. We had to fix issues with the standard hardware memory tagging integration in the OS and Vanadium (Chromium) along with fixing numerous bugs uncovered by it. We had to integrate it into our user-facing crash reporting system and had to create a system for opting into using it for user installed apps where users can enable it for either specific user installed apps or for all user installed apps with a per-app opt-out. Android has the foundation for hardware memory tagging support but it's not used as a production feature for hardening and we're not simply enabling it. We have a much better userspace implementation in hardened_malloc and while we currently use the standard kernel allocator integration, we want to improve that to be closer to what we do with it in hardened_malloc. We currently use the standard Linux kernel MTE integration for the kernel allocators and the standard Chromium PartitionAlloc MTE integration but both need to be improved to provide better security properties as hardened_malloc does. They're also missing the other forms of hardening used by hardened_malloc which go nicely with memory tagging. The stock OS developer option for memory tagging is not the same thing and only makes it available for usage without actually using it. It has to then be enabled via ADB but there's no way to use it everywhere we do or in the same way we do. AOSP having that doesn't mean it provides what we do with it at all.

> Do they have a different balancing of performance, stability and compatibility on the one side and security on the other?

Yes, but you're wrong about where our privacy and security features come from.


?

As in the OS in question is GrapheneOS itself?


I assume it's referring to AOSP.


Right, and that would actually be a mountain of work and a monumental achievement.

But one could argue that the team can move fast on the 9a because they can piggyback on the existing (GrapheneOS) distro.


Also, this is the first Pixel after this announcement:

https://news.ycombinator.com/item?id=43485950


The changes are overstated and it really changes very little for GrapheneOS. See https://discuss.grapheneos.org/d/21315-explanation-of-recent....

Android already published the full source code for Stable releases but barely anything for Beta releases. They didn't publish anything for upcoming releases with support for new devices. AOSP main branch received most changes through the Stable releases being merged into it. Most components were developed internally. Certain components were developed publicly in AOSP so they had to repeatedly merge back and forth between the internal and public main branches.

GrapheneOS would benefit from having early access to the monthly, quarterly and yearly releases as Android OEM partners do. The only benefit of having access to the main development branch would be the ability to backport more fixes than they do along with doing it earlier. We occasionally backported fixes from AOSP main for the few components developed publicly through it, which is what's going to be mostly going away. It was a big help to us as access to the upcoming quarterly and yearly releases would be. Monthly updates are too small for it to really matter but it'd still help.

Also worth noting the monthly security patch backports (Android Security Bulletins) are a separate thing from the new OS release each month. Those are backports of many of the High and Critical severity patches to older Android versions, often a month or two after they were released in the actual OS releases each month which are the monthly, quarterly and yearly releases (it's one of the 3 each month, with 3 quarterly releases and 1 yearly release per year although Android 16 is coming earlier this year instead of a 3rd quarterly release).


Oh wow, how did I miss that! If strcat / Daniel Micay happens to pass by: how much will this impact future development of GrapheneOS?


The changes are overstated in the media and have little impact on it. See https://news.ycombinator.com/item?id=43674145. We would benefit a lot from early access to quarterly and yearly releases but that hasn't ever been public and the AOSP main branch only provided the most recent changes for a few components, and not any of the ones we actually need most.


Thanks!


Anyone know why drivers in this OS can't be ported to Linux, so it could support newer phones as well?


Android Open Source Project and operating systems based on it like GrapheneOS are Linux distributions. The kernel drivers are Linux kernel drivers. The userspace drivers are part of Android's Treble hardware abstraction layer providing forwards compatibility with future Android releases and SELinux-based sandboxing with the drivers split up into isolated processes. Most of the driver complexity is in userspace with most kernel drivers acting as shims between the isolated drivers and the hardware. It's done that way for practical reasons on Android but it's good for security.

Treble's compatibility system isn't very relevant to us right now. There's a new Android release every month: a monthly, quarterly or yearly release. The devices we currently support (Pixels) receive each of these updates officially. Most Android devices do not get the monthly or quarterly updates, only the yearly ones. Other devices rely on partial backports of security patches (Android Security Bulletins) to an initial release, which are provided for ~3-4 years after the initial yearly release. If we supported typical Android devices with that situation, then we'd at least partially rely on Treble to provide the latest OS version. Pixels are currently the only devices meeting our hardware security requirements listed at https://grapheneos.org/faq#future-devices. Having proper updates for 7 years from launch is just part of that, most of the requirements are for hardware-based security features like the secure element features, hardware memory tagging, pointer authentication, USB controller support for blocking new connections and disabling USB data, etc.

GrapheneOS uses the 6.1 and 6.6 long term support branches of the Linux kernel with 6.12 as the next one that's likely going to be used to replace 6.6 for on-device virtual machines and the emulator with Android 16.


> The kernel drivers are Linux kernel drivers.

But they're drivers that are not upstreamed and which therefore make it hard to move to a newer kernel, right?


> But they're drivers that are not upstreamed and which therefore make it hard to move to a newer kernel, right?

It's no harder than it would be dealing with them if they were upstream. Google ports all the Pixel drivers to newer LTS branches and a recent branch of the mainline kernel themselves.

With the recent Android 15 QPR2 release last month (March 2025), 6th/7th generation Pixels moved from the 5.10 branch to 6.1 and 8th generation Pixels moved from 5.15 to 6.1. 6th, 7th, 8th and 9th generation Pixels share the same kernel and kernel driver source tree now. They have them ported to 6.6 and mainline kernels too, it's just not ready to ship yet. 6.6 is used for virtual machines run on the devices and the emulator. 6.12 will be what's used for Android 16.

You can see at https://android.googlesource.com/kernel/google-modules/aoc/+... that they still port the 6th gen Pixel drivers to newer mainline kernels. They're ported to 6.13 and probably moving along to 6.14 soon. It doesn't mean they're going to ship them. Only LTS branches make sense to ship and they need long term stabilization first. The likely model is going to become ~12 months of stabilization and then ~12 months of usage since LTS kernel branches are moving back to 2 years of support. It was essentially increased to 6 years for 6th gen Pixels having 5 years of support, but they moved along to upgrading to newer LTS branches for 8th gen Pixels moving to 7 years of support. Greg KH works for and with Google on the LTS kernel maintenance / testing so it's actually quite Android centric rather than server centric. Long term support desktop/server distributions historically maintain their own LTS branches so they're not really the ones involved in that for the most part.

Drivers that are upstream don't actually get much free maintenance and testing. People making API changes blindly update the drivers without testing them. They get lots of updates and refactoring, but not much ongoing maintenance. Maintainers still need to deal with it. It's very common for upstream drivers to continuously regress. They can go a long time without actually working properly across releases due to how few people use them. Most people are using distributions with frozen package versions like Debian, not a rolling release, and people using a rolling release like Arch Linux can use an LTS kernel branch to avoid a lot of this. The drivers for embedded hardware and things not used much by enthusiasts on desktops often break without it being noticed.

Android made a Generic Kernel Image system with ABI stability for drivers which does not benefit Pixels because they update the drivers to match the latest GKI kernel they ship. Similarly, Pixels don't really need the Treble HAL ABI forwards compatibility because they update all the vendor code to the latest monthly, quarterly and yearly OS releases anyway. It's helpful that drivers don't need to add all the new standard features to keep providing working support for new OS versions though. It's nice having it nearly all neatly standardized and stable. We like Treble because of the sandboxing. The forwards compatibility benefits are largely unrealized because the vendors needing it aren't doing updates much anyway. Qualcomm is moving to 8 years of full update support for Snapdragon to partially match Pixels anyway.


Two thoughts (and a half, I suppose).

First: I did momentarily forget that you're only targeting Pixel devices that are actively getting updates from Google. In light of that, so long as Google stays on top of maintaining those devices, yeah in your case that's probably fine. I'm somewhat accustomed to less responsible vendors and a lot of my views are shaped by that.

That said, I'm not wholly convinced that Google's downstream kernels are as good as running from upstream. AFAICT, for example, GrapheneOS is currently shipping a kernel for the Pixel 6 that's 3 minor versions behind. Trying to track things through the Android development process is... unintuitive... to me, so forgive me if I've missed something... If I go to https://github.com/GrapheneOS/device_google_raviole-kernels_... , grab a .ko file that shows as being committed yesterday, and run modinfo on it, I get a version of 6.1.131 which https://cdn.kernel.org/pub/linux/kernel/v6.x/ChangeLog-6.1.1... says is from March 13 and has been superseded by 6.1.134 at this writing (from checking https://www.kernel.org/ ). Contrast https://archlinux.org/packages/core/x86_64/linux-lts/ which says that Arch's LTS kernel is at 6.12.23 which is the latest of that line. EDIT: Actually, the much better comparison is that Debian 12 is shipping 6.1.133 according to https://packages.debian.org/stable/kernel/linux-image-arm64 now. So the super stable slow moving distro is in fact still ahead of Android, even slightly lagging as it is.

As to breakage/testing... Yes, someone has to test new versions. Ideally that'd be a CI farm flashing and testing devices, but I appreciate that it's not exactly a trivial problem. Nonetheless, if that results in Graphene not shipping the latest bugfixes, I feel like that's an extremely awkward position to be in.


Having the kernel that is being shipped to customers be less than 1 month behind is not that bad.

>So the super stable slow moving distro

Slow doesn't refer to how quickly Debian ships hotfixes. It refers to how long Debian keeps hotfixing packages instead of updating them to the latest version.


You can see that we're on the latest 6.1.134 and 6.6.87 now because Greg KH caught up the Android GKI LTS branch to the kernel.org LTS branch. That will be included in our upcoming OS release in the next couple days.

We ship GKI LTS updates quickly but the GKI LTS branch can sometimes lag behind by up to a few weeks as it was doing when you looked. You happened to look at the maximum point it lags behind. It should really be kept closely in sync with the kernel.org releases but it's almost entirely Greg KH dealing with it and he deals with a lot of other stuff too.


Linux 6.12 is not better for security than Linux 6.1 or Linux 6.6. It doesn't work that way. Newer kernels have substantially more attack surface and complexity. They also have tons of new bugs. Bug density is far higher in new or recently changed code. Bug density drops over time. However, backporting patches gets increasingly less complete for the older branches. There's a balance between the new and older branches. 6.12 is far too new to be a reasonable choice. Google already ported Pixels to Linux 6.12, etc. It's not what is shipped because it's full of serious bugs. Separately from that, if you believe using an LTS release and shipping the latest revisions means you avoid regularly having serious regressions, that's very wrong with the Linux kernel.

Pixels only recently started moving to new LTS releases; moving the older generations to 6.1 to match current devices was done in March 2025. They'll likely move along together to a new branch each year going forward.

Linux kernel LTS revisions are nothing like LTS revisions of most other projects. They're largely untested patches blindly applied by the LTS maintainers based on patches to mainline being marked for stable backports. If the patches apply cleanly, they ship. If they don't apply cleanly, they largely don't ship. Whether it works is an open question.

> GrapheneOS is currently shipping a kernel for the Pixel 6 that's 3 minor versions behind

That's not quite right.

We're shipping the latest Linux 6.1 and Linux 6.6 GKI LTS branch releases from Greg KH. They're currently in between 2 upstream revisions, not on a specific one. The devices all use both 6.1 and 6.6, not just 6.1. They use 6.1 for bare metal and 6.6 for virtual machines. Even the Pixel 6 has been ported to 6.13 by Google but that doesn't mean that's a stable enough kernel ready to ship.

The Android kernel branches also have a bunch of backports not included in the kernel.org LTS releases, including security patches and improvements they decided were important. Google does their own backporting and fixes in the GKI branch and Greg KH merges those into the GKI LTS branch. The kernel branch we use is the combination of the Google GKI backporting/fixes with the kernel.org backporting. The kernel.org LTS releases are far messier than you realize, and combining these things is messy too.

Linux LTS kernels are not very well tested and have tons of regressions. Quickly updating to the new LTS versions is problematic and we regularly encounter major regressions, especially in certain areas like f2fs and USB. We still update right away to the new GKI LTS versions. We're currently on the latest GKI LTS releases for each branch.

You'd have to ask Greg KH why there are still delays despite Google supporting it. It still seems to be him doing most of the kernel.org LTS and also GKI LTS work by himself, without nearly as much review or help by others as you would think. This is also tied into the severe regressions regularly happening with the LTS releases. Those can be security regressions too. Immediately updating to them is not necessarily a great idea with how much goes wrong at the moment.

They unfortunately sometimes lag behind the kernel.org releases. We used to merge the latest upstream kernel.org LTS releases ourselves but Greg KH convinced us we don't need to do that anymore and should just use the GKI LTS branch instead. We're not completely happy with it since it's not fully kept in sync but we're using our resources elsewhere at the moment.

> Actually, the much better comparison is that Debian 12 is shipping 6.1.133 according to https://packages.debian.org/stable/kernel/linux-image-arm64 now. So the super stable slow moving distro is in fact still ahead of Android, even slightly lagging as it is.

Debian is usually further behind than Greg KH's GKI LTS branch. Comparing at a snapshot in time doesn't mean much. The GKI LTS branch should really be kept in sync but the GKI ABI stability system makes maintenance hard and is entirely worthless for Pixels. We would prefer if the whole GKI system did not exist. For Pixels, the kernel image and all the drivers are built from the same kernel source tree, so the whole system for driver ABI compatibility is just making things more complex and worse.

> As to breakage/testing... Yes, someone has to test new versions. Ideally that'd be a CI farm flashing and testing devices, but I appreciate that it's not exactly a trivial problem. Nonetheless, if that results in Graphene not shipping the latest bugfixes, I feel like that's an extremely awkward position to be in.

We do heavily test them. Our community helps with it. We OFTEN find regressions in the new LTS kernels. It often takes months before the issues we find and work around get fixed upstream. It's worth noting that due to mistreatment we've largely stopped helping them except for firmware, hardware or things we don't want to maintain downstream for some reason. It would be better if everyone collaborated on maintaining LTS kernels but instead it's largely 1 person and a couple others doing it with support from Google for testing, etc.


Right, no, I wasn't particularly objecting to 6.1, I was pointing at the patch level on it. I would personally take [quickly checks kernel.org] 5.15.180 (latest 5.15) over 6.1.130 (not-latest 6.1), because I'm more concerned with bugfixes than feature releases at this point. If the GKI LTSs are backporting fixes, that may well cover it, although that starts to veer into making me nervous because I rather agree with

> Linux kernel LTS revisions are nothing like LTS revisions of most other projects. They're largely untested patches blindly applied by the LTS maintainers based on patches to mainline being marked for stable backports. If the patches apply cleanly, they ship. If they don't apply cleanly, they largely don't ship. Whether it works is an open question.

I'm also not super fond of frankenkernels. And... I'm confused how you feel about them? If backports suck, shouldn't you want to be chasing the very bleeding edge? I wasn't originally intending to argue that everything had to ride the very latest and greatest, but if backporting is inherently fragile and bug-prone, shouldn't you want to be on the very latest stable version (so as of this writing, 6.14.2)?


We want to be on an LTS branch, but we'd prefer to be on a newer LTS branch. We would currently want to be on 6.6 with a near future move to 6.12 once it was stabilized enough. However, we would much rather be on what Google is heavily testing than using a kernel they're not using on their CI infrastructure and production devices.

Pixels moving 6th, 7th and 8th gen devices to 6.1 happened in March 2025. It's the first time they've moved to new LTS branches. It's likely going to move to using a newer LTS release where it would currently be using 6.6 and then moving to 6.12 after the next one comes out. We expect they move to having around a year to stabilize the new LTS and then use it for around a year before moving to the next. That fits nicely into the new 2 year lifespan for LTS kernels. This is a transition period. Once the longer than 2 year LTS kernels are gone, the quality of the LTS kernels will rise because there won't be as many to maintain. There are currently too many combined with too few people working on it. Greg KH having to handle both the kernel.org LTS and GKI LTS doing a huge portion of the work is clearly a problem. We'd also like to see the end of GKI ABI stability but that's highly unlikely. Yearly moves to new LTS kernels will at least make it a lot better.

> I would personally take [quickly checks kernel.org] 5.15.180 (latest 5.15) over 6.1.130 (not-latest 6.1)

Latest 5.15 has far fewer fixes backported for the same time periods than 6.1. The missing fixes in 5.15 are far more than a couple minor revisions. Similarly, 6.6 has more than 6.1 for the same time period and 6.12 has more than 6.6.

> And... I'm confused how you feel about them? If backports suck, shouldn't you want to be chasing the very bleeding edge? I wasn't originally intending to argue that everything had to ride the very latest and greatest, but if backporting is inherently fragile and bug-prone, shouldn't you want to be on the very latest stable version (so as of this writing, 6.14.2)?

Linux kernel code quality, review and testing is quite low. The bleeding edge kernels are nearly unusable in production for this use case (users running all kinds of software, using all kinds of different Bluetooth, USB, etc. accessories and so on while caring deeply about battery life) and have a ton of newly added security bugs which aren't found and fixed yet. We think that's a much bigger issue. We happily use Arch Linux for a lot of stuff but we use the LTS kernel package which is 6.12 at the moment.

If LTS quality was increased substantially, then we'd want to be on the latest LTS branch a while after the initial release, i.e. 6.12.15+ or so. However, at the moment, some serious regressions take them so long to find that it's still too new. We have high stability requirements where we can't have niche USB functionality, Bluetooth, video recording, etc. functionality regressing. The out-of-tree drivers are an area we don't have as much pain with since they're nearly all made for Exynos / Pixels and the drivers from the vendors so changes actually get tested well. The regressions are in the upstream code. More stuff coming from upstream would make LTS updates more, not less, painful, other than the GKI ABI stability nonsense we don't want.


Ton of information here. But I was hoping to find out how mobile Linux distros (mobian, postmarket, pureos, new ones) could support newer phones, like these Pixels. I still don't know after reading this thread. :-D

I don't want to use Android, I want to use Linux and Phosh or similar. But so far, the supported hardware is junk.


GrapheneOS is a mobile Linux distribution. It's not systemd and GNOME which makes it a Linux distribution but rather the Linux kernel. There's nothing stopping people from running a traditional desktop Linux software stack on the same hardware we support. That doesn't interest us since it would be a massive privacy and security regression from the Android Open Source Project. It would also give up a lot of usability and robustness, and lose the huge mobile app ecosystem, including a large number of open source mobile apps.

The Linux kernel is increasingly the elephant in the room when it comes to security and hasn't experienced anything like the massive progress made in Android's security in userspace. Piling on many more exploit mitigations to the Linux kernel won't really change this. We need to do a lot more work on it than we already do.

GrapheneOS has hardware virtualization support, which is going to be one of the ways to avoid depending so much on the Linux kernel's fragile security. The main usage for it in GrapheneOS will be running nested GrapheneOS instances for better sandboxing rather than running other operating systems. Android supports using the virtualization support to run other operating systems via the Terminal app and we have support for GUI applications, speaker, microphone and opt-in GPU acceleration backported to the Terminal app. The main use case for that app will be running desktop applications from other operating systems for the desktop mode. Windows 11 support would be a compelling addition to it and we may implement that in the next year or so.


I'd like a mobile OS where I can reuse my existing knowledge. Write software for it with Rust, Python, pipewire, systemd, Wayland, etc. Login with ssh.

No interest in Android apps or Windows (hah). (Though maybe I'll try Waydroid one of these days.)

I don't know anything about your distro: not the graphics stack, what the package manager is, or even whether there is one. Meanwhile my starlite tablet is awesome because it works just like my desktop Fedora or Mint, though I installed Phosh on it.

Security is nice, but not before there is even a single feasible device on the market. Librem is just barely limping along, and I mean barely, with a five-year-old handset that was obsolete when it debuted.

If your kernel is so advanced it really should be upstreamed, so these other distros could use it and support new Pixels. Y'all working together with other mobile projects would be so much better than the surveillance dystopia we are currently living in. Maybe it's hard, but it is incredibly important. I can help, though I have limits.


GrapheneOS is an Android/Linux distro, not GNU/Linux or musl+busybox/Linux; I suspect most of their security work isn't portable to the unixy Linux distros.


I don't care much about security. I do care, but not as much as getting a modern phone working with Linux. It can be hardened once it is working.


I read this whole thread again and it seems that the answer to the original question is: the Pixel drivers are maintained outside of the kernel tree by Google, not by the GrapheneOS folks.

Sounds like I should complain to them instead. Yet they are known for being unreachable.


Sounds like Android is making a microkernel out of Linux.


There are some huge drivers from companies like Broadcom and Qualcomm. There's still a massive amount of kernel driver code along with the massive amount of core kernel code. Android is the main driving force behind the push for Rust for Linux kernel drivers because Google wants all these SoC and device drivers to start being written in Rust for security reasons but also for stability too. Driver memory corruption bugs are a huge source of both security and stability issues. A majority of new code in Android releases since Android 13 has been in memory safe languages. The Linux kernel is increasingly the elephant in the room with the monolithic kernel design (zero isolation / sandboxing within the kernel) combined with memory unsafety. It's largely the opposite of Android's extensive work to improve security in userspace. They've worked hard on getting exploit mitigations into the kernel but those tend to be much weaker than they are in a lot of userspace (browser renderer processes have similar issues).


For entire classes of applications, you can treat Linux as a black box. Things like syscalls, /proc & /sys are all incredibly stable. So that's what Go does with its syscall package; it completely sidesteps libc and ld.so, and whenever it can, it just produces static builds - at least on Linux.

They tried to get away with it on OpenBSD, macOS, etc and got their hand chewed off.


There are a few reasons I avoid Go, and their syscall package is on the list. It breaks a bunch of tooling that requires LD_PRELOAD or static linker interposition. (One example: Interposing on the clock to test timeout logic.)

I wish they had an option that just used libc. Maybe someday someone will add a target architecture/port that just uses POSIX. I'd prefer that on Linux too.
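
For what it's worth, a minimal sketch of the kind of clock interposer I mean might look like this (FAKE_CLOCK_OFFSET is just a made-up knob for illustration, not any particular tool's API):

  // clock_shim.c - minimal LD_PRELOAD clock interposer sketch.
  // Build: gcc -shared -fPIC -o clock_shim.so clock_shim.c -ldl
  // Use:   LD_PRELOAD=./clock_shim.so FAKE_CLOCK_OFFSET=3600 ./program
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <stdlib.h>
  #include <time.h>

  int clock_gettime(clockid_t clk, struct timespec *ts) {
      // Resolve the real libc implementation once.
      static int (*real)(clockid_t, struct timespec *);
      if (!real)
          real = (int (*)(clockid_t, struct timespec *))
                     dlsym(RTLD_NEXT, "clock_gettime");

      int ret = real(clk, ts);

      // Shift the reported time forward to exercise timeout logic.
      const char *off = getenv("FAKE_CLOCK_OFFSET");
      if (ret == 0 && off)
          ts->tv_sec += atol(off);
      return ret;
  }

This only works because a dynamically linked program resolves clock_gettime through libc; a static Go binary makes the syscall directly, so the shim is never even loaded.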


Intercept at the syscall level instead with seccomp. Like I'm doing here:

https://github.com/lukasstraub2/intercept-anything

Go is hardly the only thing where LD_PRELOAD doesn't work.
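
For anyone curious what that looks like at its most basic, here's a minimal sketch of the mechanism (an illustration only, not code from the repo above): a raw seccomp BPF filter that makes the openat syscall fail with EPERM for this process and anything it spawns, regardless of whether the target goes through libc or, like Go, issues the syscall itself.

  // seccomp_demo.c - minimal sketch of syscall-level interception.
  // Build: gcc -o seccomp_demo seccomp_demo.c
  #include <errno.h>
  #include <fcntl.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>
  #include <stddef.h>
  #include <stdio.h>
  #include <sys/prctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void) {
      struct sock_filter filter[] = {
          // Load the syscall number from the seccomp_data passed to the filter.
          BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
          // If it is openat, fail it with EPERM instead of running it.
          BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_openat, 0, 1),
          BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
          // Let every other syscall through unchanged.
          BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
      };
      struct sock_fprog prog = {
          .len = sizeof(filter) / sizeof(filter[0]),
          .filter = filter,
      };

      // Required so an unprivileged process may install a filter.
      prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
      if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) != 0) {
          perror("prctl(SECCOMP)");
          return 1;
      }

      // glibc routes open() through the openat syscall, so this now fails.
      if (open("/etc/hostname", O_RDONLY) < 0)
          perror("open");  // open: Operation not permitted
      return 0;
  }

A real interception tool would go further (SECCOMP_RET_TRAP or the newer user-notification API to emulate the call rather than just failing it), but the point stands: the filter sees the syscall no matter how the binary made it.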


That's not Go-specific, it's just how static linking works. As nolist_policy advised, it's more useful and precise to intercept programs at the syscall (rather than library call) boundaries. Programs are free to do all sorts of insane stuff, there was e.g. a Scheme interpreter that kept growing the stack until it caught SIGSEGV, at which point it'd run a compacting GC and reset the stack pointer. ¯\_(ツ)_/¯

Regarding LD_PRELOAD, I highly doubt that this is required by POSIX. macOS (which is a certified UNIX) uses DYLD_INSERT_LIBRARIES instead. OpenBSD (which is known for their pedantic documentation) uses LD_LIBRARY_PATH, doesn't mention any standards, and refers the reader to SunOS 4.0. If this is somehow standardised, I'd love to read the actual document.


So happy to read this because I don't see it mentioned often enough.

I have an ErgoDox EZ, and I still prefer using my Framework 13 (with Kanata![0]) because having my thumbs navigate the trackpad is so convenient even with a keyboard-driven setup.

[0] https://github.com/jtroo/kanata


I have a touchpad in between the halves of my Ergodox EZ. It's not exactly as easy to reach as a laptop touchpad, but it's worlds better than moving your shoulder to reach a mouse.


>The City was among America’s premier trains, a luxury streamliner that could hit 110 miles per hour while white-jacketed waiters balanced trays of cocktails

I wonder how passengers back then would have imagined rail travel today, 75 years later (aside from the life-threatening storms, of course). The Overland Route is now freight-only, and the closest equivalent, the California Zephyr, takes about 52 hours to make the journey this train did in just 40!

More on topic, I was surprised to read:

> When the steam generators’ water tanks ran dry, heat disappeared, too.

Weren't they surrounded by frozen water? Is there any reason snow couldn't be used in an emergency to heat the train?


> I wonder how passengers back then would have imagined rail travel today, 75 years later (aside from the life-threatening storms, of course). The Overland Route is now freight-only, and the closest equivalent, the California Zephyr, takes about 52 hours to make the journey this train did in just 40!

I don't think people ride the California Zephyr to get from Chicago to the Bay Area as quickly as possible. Most of us spent as much time as possible in the observation car marveling at the Rockies and Sierras.


> I don't think people ride the California Zephyr to get from Chicago to the Bay Area as quickly as possible.

Of course they don't. It's too slow. Our rail shouldn't be as bad as it is.

I love that trip, and I've taken it more than twice, oohing and ahhing all the way, but I do not need it to last as long as it does.


A few reasons that occur to me:

1. The volume of snow to be collected would have been significantly greater than the resulting water.

2. Heating snow at elevation requires more energy.

3. Perhaps getting snow into the steam generator wasn't so easy.


> The volume of snow to be collected would have been significantly greater than the resulting water.

Yes, it depends on the nature of the snow, but a broad rule of thumb is that if you want a litre of water, you need five litres of snow.


The stat I've seen is even worse at 10:1.


Depending on elevation, the type of parent storm, the ambient temperature, and other factors, the water-to-snow ratio can vary from 1:6 (very heavy, chunky lake-effect snow falling right at the freezing point, the kind you get wet just walking from the car to the door) to 1:12 (the kind typically seen in mountainous, more semi-arid locales: the fine white snowboarding/skiing snow). Generally the colder the air, the less moisture in the snow, and the same with height, unless it's precipitating out due to orographic uplift first.


Anecdotally, I think I've had to scoop about 30 l of snow into a stuff sack to get about 2-3 l of melted water (and I've probably added at least a cup or two of water to get started, to prevent the bottom of the pot being scorched before the snow melts), so that sounds about right.


> I wonder how passengers back then would have imagined rail travel today, 75 years later

Show them the airplane that gets them to the same destination in a couple of hours vs days


It was January 13, 1952; they already had airlines.


OTOH, flying was considered a luxury in the USA until the airlines were deregulated in 1978. https://www.npr.org/2024/07/05/1197960905/flying-airlines-de...


Deregulation had far less impact on prices than people generally quote. Getting out of the energy crisis did a lot to shift prices quickly which made it seem like deregulation was suddenly working. https://en.wikipedia.org/wiki/1970s_energy_crisis

Longer term, airlines got a lot better and a lot cheaper worldwide at roughly the same rate because things like fuel economy and engine maintenance timescales skyrocketed.


The general consensus among economists is that deregulation had a big effect on airline prices. For example:

>...Every serious study of airline deregulation in the intervening years has found that travelers have indeed benefited enormously. As we documented in our 1995 Brookings book, The Evolution of the Airline Industry, airfares, adjusted for inflation, fell 33 percent between 1976—just before the CAB instigated regulatory reforms—and 1993. Deregulation was directly responsible for at least 60 percent of the decline—responsible, that is, for a 20 percent drop in fares. And travelers have benefited not only from low fares, but from better service, particularly increased flight frequency.

https://www.brookings.edu/articles/the-fare-skies-air-transp...


Percentage drops don't add; they multiply.

A 20% drop on its own is 20%, but if prices fell 33% overall it requires a separate 16.25% drop from the other factors (since 0.67 / 0.8 = 0.8375), not a 13% drop. Meaning deregulation was responsible for about 55% of the total decline for the numbers to work out.

PS: If they’re confused or lying in just that blurb I’d question the rest of their analysis.


33 * .6 = 19.8

My general feeling is always that if someone is going against the consensus of the experts who have studied something (whatever the issue is - it could be climate change or GMO food or effects of airline deregulation or whatever) I think the burden of proof is on the person who claims the experts are all wrong.


Try and make what you did work.

  33 * .6 = 19.8
  33 * .4 = 13.2

  1 - (19.8 / 100) = 0.802
  1 - (13.2 / 100) = 0.868

  0.802 * 0.868 = 0.696136

  (1 - 0.696136) * 100 = 30.38% discount not 33%

Thus your burden of proof. Either they are utterly incompetent or lying.


Thinking about this some more, what they said is just as wrong as saying 2 + 2 = 5, but people are apparently innumerate enough not to notice.

I’m not sure if I should applaud the blazing disregard for people’s intelligence or appreciate that it works.


Lol. It is pretty clear that the key point of the summary I quoted was that the consensus among economists who have studied this was that at least 60% of the decline in airfare prices can be attributed to deregulation. You didn’t need to put in all this effort to misinterpret it. (No one was adding percentages or whatever you were claiming.) Them rounding to the nearest multiple of 10 is reasonable - not rounding would be implying a higher precision than is really warranted.

If anyone is reading this thread, it illustrates a problem common in on-line discussions. Someone will make a big claim that contradicts the consensus of the experts in the fields who have studied the issue. (I have found that when discussions touch upon economics, it is virtually guaranteed, but it happens in many areas.)

Someone then points out that if the person is correct and the consensus position of the experts who have studied this issue are all wrong, the burden of proof is on them. The original poster won’t try and do this but will instead try to come up with a reason that those who devote their careers to actually studying these issues can’t be trusted or are incompetent, etc. Unfortunately, at this stage, the person is usually even more entrenched in their personal pet theory. They usually aren’t so blatant as to say “Either they are utterly incompetent or lying.”, but that is where we are.


You’re assuming they are accurately describing results of the field. They are not and in fact the field didn’t come up with such quotable round numbers.

I can point to plenty of research that says otherwise, but when you quote someone saying 1 + 2 = 4, they don't merit a comprehensive rebuttal; you simply point and laugh.

The guy being quoted presumably came up with numbers from thin air which is why they both don’t make sense and don’t line up with actual research.


I am not an expert, just recalling the latest thing I heard/read about the subject, but I think we can both agree that in the 1950s and 1960s, flying was not as egalitarian as it was after the changes in operating environment we're discussing.


Yeah, I mean, you hear about what a lot of airlines were like back then, and many domestic first/business classes today seemingly don't even compare, even though energy costs were significantly higher.


Airline regulation set floor prices for certain routes, so I'm not sure how the energy crisis resolving would have fixed that.


Which is why I said less of an impact not zero impact.

Regulators didn't set floor prices at wildly unreasonable levels. So yes, deregulation did modestly lower prices and service quality because airlines now competed in different ways. But we're talking about the difference in the cost of an in-flight meal, etc., not some wildly different number. For that you needed wildly more efficient aircraft from other companies.

Post-deregulation we also got lots of bankruptcies and bailouts, which shifted costs from consumers to taxpayers.


> Weren't there surrounded by frozen water? Is there any reason snow couldn't be used in an emergency to heat the train?

I have tried this.

Snow is not very dense. A lot of snow makes a very small amount of water. Quite an astonishingly small amount of water.

I expect the steam generators were quite thirsty, but I do not know.


There were special water filling stations at many stops.

That's why currently running steam engines are better off with diesel pushing them: https://youtu.be/12Zpb0Yh-sM


UP's Big Boy got some upgrades after a year or two of touring the UP system. It no longer needs a Diesel helper. Previously, the helper engine carried the modern Positive Train Control gear, and the steam engine cab had a display connected by a cable to the Diesel helper. Now, UP 4014 has its own PTC gear and antenna in the tender, so it's self-sufficient. UP runs that engine on heavily used main line track, so it needs to be fully connected to safety and dispatching systems.

This was mostly a power problem. UP 4014 had a small steam turbogenerator atop the boiler to power lights and such. Now it has three such generators, and there's enough electric power to run auxiliary equipment.

UP did a serious rebuild on the Big Boy, to original main-line standards. Many parts were fabricated from scratch. It's ready for regular use for decades. Most heritage railroads lack the resources for such major overhauls.

Here's the first test run with no Diesel, in May 2024.[1]

[1] https://www.youtube.com/watch?v=khJZ6NO5rhQ


Oh, "PTC gear" as in equipment, not just one magical gear that prevents accidents, as popped into my mind at first!


Is PTC the first step in getting rid of human engineers, billed as a safety system?


Snow is dirty, and getting dirty water into a high pressure steam engine is an awful idea, even in the short term.


Why, where and how is snow dirty?

The much bigger issue is that steam is massively less dense than liquid water: roughly a 1:10 ratio. Loading up 10x more snow than you need water is no small task.


Snow is dirty all the time, everywhere, especially near commonly-used train lines. It’s not difficult science. Particulates, dead insects, leaves, mouse shit — all end up in snow. Scoop up a bucket of snow and melt it and I guarantee you’ll see a bunch of crap in the bottom of it.


Is that true at 7k ft on a train line that only has 1 train on it? I can understand next to a busy freight line or something like that but it seems like freshly blown snow (to the volume that it stops a train) wouldn't have much in it. I can't say I know the purity of snow at that altitude though.


A 12 foot snowslide might have trees in it.


You must be thinking of a different kind of snow than I am. Have you ever been through Donner Pass?


Yes. Every piece of snow crystallizes around a piece of dust or something else in the air. Even the prettiest snow in the world will have some "junk" in it after you melt it. Not to mention all the stuff that falls off nearby trees and passing trains.


I’ve played in snow on most continents, in the wild and the urban. Never have I seen snow clean enough to put into a steam engine. You may be underestimating (a) how important clean water is in this context, (b) how much particulate matter there is in the air and, thus, in snow.


I don't really have time for a deep dive, but from a brief look these are diesel-electric trains. The water and steam we are talking about here likely come from a simple boiler where the steam is only used to deliver heat to the carriages.

This is similar to heating systems installed in skyscrapers of the period (I'd definitely recommend this video if you haven't seen it https://youtu.be/nkgM0qCy5o4?si=46vNv6aaoYHcDO2l).

After the steam condenses, the water in a skyscraper is trivially returned to the boiler (by gravity). However, on a train I suspect they run it as a total-loss system and the condensate is simply discharged when it reaches a trap.

This whole system is relatively low pressure and, more importantly, low velocity, so it's unlikely it would have caused an immediate issue (the train would obviously have required work before going back into service in any case).

I think the problem is more likely to have been an inability to collect enough snow to make a meaningful amount of water. In addition, it would likely have needed to be melted before being introduced to the boiler; you can't just shovel it in.


I think you had a typo, you meant "snow" where you wrote "steam".

For the water:steam ratio, obviously it's an expansion, and I think it's around 1:1,600. Steam wants space.
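For what it's worth, the ~1,600x figure matches the densities of liquid water and saturated steam at 100 °C and atmospheric pressure (roughly 958 kg/m³ versus 0.6 kg/m³):

```latex
\frac{V_{\text{steam}}}{V_{\text{water}}} = \frac{\rho_{\text{water}}}{\rho_{\text{steam}}} \approx \frac{958~\text{kg/m}^3}{0.60~\text{kg/m}^3} \approx 1600
```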


> The Overland Route is now freight-only, and the closest equivalent, the California Zephyr, takes about 52 hours to make the journey this train did in just 40!

I mean, this is largely a product of the US's general disinterest in and underinvestment in passenger rail; with a modern high speed system it'd be about 10 hours.

10 hours is _probably_ too long to be particularly useful, mind you; people would just fly. The sweet spot for high-speed rail is more in the 5-hours-or-less range; at that point, when you factor in the faffing around involved in getting to airports, going through security, the inevitable delays, etc., the train is still faster.

The longest high-speed route in the world is about this length: https://en.wikipedia.org/wiki/Beijing–Kunming_high-speed_tra...


It's doubtful that you could build a high-speed rail line that could get from Chicago through the Rockies to California in 10 hours. Even more doubtful that the cost-benefit would be worth it, considering you can fly from Chicago to the Bay Area in 3.5 hours. And you can't factor in time spent getting to the airport without also factoring in time to the train station and time stopped at other major cities on the way (as trains are wont to do).

The Chinese route you mentioned does not need to go through one of the largest mountain ranges in the world. It's also at least 15-20% shorter than the distance from Chicago to SF, and experiences much less elevation change over the course of the journey. And the wiki article claims it "averages 10.5 to 13.5 hours", so there is a huge amount of variability in time to travel on that route.


> And the wiki article claims it "averages 10.5 to 13.5 hours", so there is a huge amount of variability in time to travel on that route.

Yeah, I think it depends on how many stops it calls at; there are a few different services on that line. While it's a high-speed line, they're mostly not classic express services and actually have quite a few stops. I'd expect a notional Chicago->California high-speed line would have fewer. A journey with no stops at all at 300 km/h (i.e. solid high-speed rail, but not absolute state of the art) would be 10 hours; any stops would add a bit.
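Back-of-the-envelope, taking Chicago to the Bay Area as roughly 3,000 km in a straight line (an actual alignment through the Rockies and Sierra would be longer):

```latex
t \approx \frac{3000~\text{km}}{300~\text{km/h}} = 10~\text{h}
```

So 10 hours is really a best case with a very direct route and no stops.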

> And you can't factor time spent getting to airport and not factor time to train station

As a general rule, airports are not hugely conveniently located. Intercity rail in big cities will normally depart from a central train station, which will usually be genuinely central and linked into all the other transport. You get there, walk onto the train, and you're done.

The airport will _never_ be central, for obvious reasons, and if it has a rail line at all, it will likely be a single line, usually relatively infrequent, and, for some reason, with the airport end almost always extremely inconveniently located (this seems to be a law of nature). You'll want to get there at least an hour in advance, and the plane will likely be delayed at least somewhat on both ends. At least one queue will be involved. On the other end, you will then make your way slowly into the city.


10 hours is perfect for a night train.


RAMFS is a genius idea. That solves most of the SD card health and speed issues without needing to get a whole hard drive. I know Puppy[0] and MX Linux[1] were made to run like that too.

[0] https://puppylinux-woof-ce.github.io/ [1] https://mxlinux.org/


I used to run a Pi as a WireGuard entry point to my home network. I made the filesystem read-only, created a RAM disk, and moved logs and any other writes onto it to protect the longevity of the uSD; it had the added security benefit of a read-only FS. I'd remount it r/w occasionally and run updates. It ran flawlessly for years (at a time when I was killing a SanDisk uSD in Pis roughly once per week) until I decommissioned it.
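For anyone wanting to replicate this, here's a minimal sketch of that kind of setup; the device identifiers, sizes, and mount points are illustrative, and newer Raspberry Pi OS releases have an overlay-filesystem option in raspi-config that gets you something similar:

```
# /etc/fstab (illustrative) - root mounted read-only, volatile paths on tmpfs
PARTUUID=xxxxxxxx-02  /         ext4   defaults,ro,noatime     0  1
tmpfs                 /var/log  tmpfs  nosuid,nodev,size=64m   0  0
tmpfs                 /tmp      tmpfs  nosuid,nodev,size=128m  0  0
```

To update, remount read-write with `sudo mount -o remount,rw /`, run the updates, then remount read-only (or just reboot).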


I've been running a public-facing weather website on my RPi2 since 2014. I'm still on the same SD card since all HTML assets and logs are on tmpfs. The only things written to the SD card are DB entries once every 5 minutes.


That's pretty sweet.

My suspicion about the many uSD cards I've killed is that it was power issues, power loss, etc. In terms of wear, I don't think a typical Pi would be doing enough to wear them out unless it was being hammered.


Your kernel will already cache IO in memory (the page cache).

You can decrease the syncing to disk to, e.g., once per day.

```sysctl.conf
# 8,640,000 centiseconds = 24 hours between writeback flusher wake-ups
vm.dirty_writeback_centisecs = 8640000
```
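If you go down this road, a couple of related knobs usually matter too; a sketch with illustrative values (these are standard Linux vm sysctls):

```sysctl.conf
# Illustrative values only
# How old dirty pages must be before they become eligible for writeback (~24 h)
vm.dirty_expire_centisecs = 8640000
# Let dirty pages take up more RAM before background/forced flushing kicks in
vm.dirty_background_ratio = 20
vm.dirty_ratio = 50
```

The trade-off is that anything still sitting in the page cache is lost on a power cut, which matters on a Pi without reliable power.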


Thanks for this information. That's actually really helpful for me to know in my server-administration dealings. I _hate_ disk IO and disk thrash. I was aware of the kernel stuff, but it completely eluded me that I could modify it. I put it at 7200.


Good point, but I was talking more about distros designed for a smooth day-to-day experience. A user would probably want something like SquashFS (to save space on the SD card) and ZRAM (to conserve RAM), since all their files would be living there.
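For the ZRAM side, the usual approach is a compressed swap device backed by RAM; a minimal sketch (size and compression algorithm are illustrative, and most distros ship zram-tools or a zram generator that automates this):

```sh
# Illustrative: one zstd-compressed swap device living in RAM
sudo modprobe zram
sudo zramctl /dev/zram0 --algorithm zstd --size 1G
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0
```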

