kawsper's comments | Hacker News

I have one of those somewhere, I thought it was a cool piece of tech history.


Me, too! I worked at Sun from 2002-2004, and some of us got them as pointlessly fancy door badges for datacenter access. In hindsight it was such a novelty, almost a gag, but they were kind of awesome for what they were. And you felt like an absolute badass when using one to badge in!


AYN Thor looks quite promising in that category https://www.ayntec.com/products/ayn-thor

It’s an ARM machine running SteamOS.


Your link says it runs Android?


You're right, I was mistaken. I've seen some YouTubers playing games on it, but they use GameHub to run Steam games; somehow I thought it was running SteamOS.

Sorry for the confusion.


It’s not just you. My Firefox, with no extensions, has struggled on YouTube the past few weeks.

Sometimes I can’t even click on the front page, sometimes when I open a video it refuses to play.

I don’t know what’s up, but it works in Chrome.


I also had it stop working completely. I thought they had finally wised up to my adblocker, but I decided to install that update I had been sitting on for a while, and it just started working again.


Probably just the typical nefarious activities of YouTube. Either "accidentally" driving users to switch browsers, or experimenting with circumventing ad blockers, or negligence in testing, or who knows what.

If they want the "Google has no browser monopoly!" claim, then they should be obligated to make their services work perfectly with the alternative, instead of subtly scheming and manipulating people.

One thing you can do is to use an Invidious instance. Those don't support live streams and shorts, but at least you don't have to deal with the atrocious normal YouTube frontend.


That may also just be Firefox's way of telling you it has updated and needs to be restarted.


The most irritating thing about the credit-card-sized ones is that they aren’t attached to anything if you move around.

I like to be mobile, so I put some Velcro Ultra-Mate on the back of my laptop and on my disk; that way the disk stays attached and plugged in while I move around.

I also got a 90-degree USB-C cable for a more direct cable route.


Is this what we get when we stop making laptops with upgradeable internal storage?


I just upgraded the internal storage of my Lenovo T14 (AMD, Gen6) to 4TB, and that took all of 5 minutes. And that laptop was definitely made in 2025, although I agree that consumer sentiment overwhelmingly favors models that are less convenient in that respect.


Meanwhile, modern Apple users: https://youtu.be/RDBX6FTYLoQ


For me it was pure ASMR content


Same, with an X1 Gen 5; upgraded the NVMe to 1TB.

This boy is 8 years old now (bought in November 2017) and still delivers the €€€ at $consultingjob


I still utilize large external drives on my laptop with upgradeable storage, so we get it either way.


Not really an issue outside the Apple ecosystem and a few fringe tablet hybrids like Microsoft's. Vast majority of laptops sold today have standard SSDs you can upgrade.


> Vast majority of laptops sold today have standard SSDs you can upgrade.

Though some make it quite difficult to get in to replace the drive, and to put everything back together afterwards.

Some are very easy: an obvious compartment at the bottom; unscrew the lid, remove the drive, put in the replacement, power up and transfer the old content, done. I've seen both NVMe and 2.5" SATA drives arranged this way. On the other hand, upgrading my friend's laptop recently involved taking most of it apart: the drive was under the keyboard, inaccessible from the back, with other cables (for keyboard, antenna, screen) in the way, so they had to be disconnected and were in very inconvenient positions to reconnect afterwards…


>the drive was under the keyboard inaccessible from the back

Must be an old design from ~10 years ago. Acer, I presume.


What do you do with all that storage?

Here's the root partition (well, LVM) on a laptop I have been using for over three years now:

    » df -hT ~
    Filesystem                Type  Size  Used Avail Use% Mounted on
    /dev/mapper/vgubuntu-root ext4  869G  298G  527G  37% /
I do have an external drive for backups and another for drone footage but this is it. Everything else is either fast enough in the cloud or just here.


I record video in raw, so it’s mainly dealing with video files during editing.

I want to see if I can move to ProRes in my import step, but I haven’t found a good workflow that allows for that.


Rust compilation artifacts.


This reminded me of my professor's laptop with a Ricochet wireless modem attached in much the same way back in the early/mid 1990s. That was an early wireless ISP prevalent in the SF Bay Area.


Maybe it is nostalgia speaking, but the SMS had a great sound chip, and some amazing composers.

My absolute favourite song is from Ninja Gaiden "Escape in a forest" (starts at 03:36) here: https://www.youtube.com/watch?v=MFoA0OICiB4&t=207s

Someone played that song with real instruments, and it's also amazing: https://www.youtube.com/watch?v=Arun9KuXImk


Probably a little nostalgia. The SMS sound chip is one of the cheapest and most primitive jellybean sound chips of the era (only 3 square waves and noise, with no envelope generator either). That isn’t to say appreciating the art of doing more with less isn’t valid. It’s sort of like an MS Paint type of thing, though.


I agree. I had an SMS growing up and always noticed the music sounded "cheaper" than the NES, almost childish. I think it really was just the square waves making everything sound the same. The NES had more interesting output with its triangle and sawtooth waves, which gave it more edge and character.


It may not have had a sawtooth, but it did have the DMC (sample channel), which, although very quirky, could create a lot of variety when used melodically: a sampled bass, or drums, or an orchestral hit!


Ooh yeah, the DMC must've been used in the Super Mario 3 soundtrack. I remember the steel drums (?) in that sounded so good for an 8-bit game.


Yes, that steel pan sound would have been done with the DMC!


The NES' own sound chip didn't have a sawtooth channel, but some games had an onboard sound chip that added one, like Konami's VRC6: https://www.nesdev.org/wiki/VRC6_audio


The Japanese Mark III had an available Yamaha FM expansion kit that could sound pretty great. US-based gamers couldn't listen to the soundtracks at the time, but emulators and whatnot make it possible to experience today.


It was cheap AF but that ends up giving it a specific aesthetic...

My all time favorite is the opening to Alex Kidd in Shinobi World: https://www.youtube.com/watch?v=9dx9AAKm6dI


> great sound chip, and some amazing composers.

I understand it as more of the latter than the former.

The hardware might not have been great, but they were dedicated to pushing it to the extreme limits of what it could do, and all of it punched way above its weight in all respects.

Japanese companies saw an opening, and extremely brilliant people went in head first, sleeping under their desks to leave their mark on the field.


That’s an interesting perspective; I found something similar when travelling as a vegan.

The limitations force you to hunt for smaller, sometimes fringe restaurants off the beaten path, run by passionate people.


This is true. I’m not vegan or vegetarian, but I look for restaurants that cater to those audiences when traveling. It’s probably because they’re putting a lot more attention into the ingredients, which shows in a more thoughtful end product.


We have a family policy when traveling to never eat anywhere we could frequent at home.


I enjoy exploring vegan restaurants all over the world too! I often avoid burgers, because they are easy to make, I guess, and I've had a lot of them over 8 years of my vegan journey. Instead I look for more unique menus, so that I can learn things and replicate them at home. Traveling is the only time I allow myself some fish and dairy, or maybe some eggs; no meat at all, though.


Relying on a hosted image also caused some disruptions for Nomad (the scheduler from HashiCorp), because the default pause image was hosted at gcr.io, which Google killed, and it moved to registry.k8s.io.

The Nomad team made this configurable afterwards.
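
If I remember the option name right (worth checking against the Docker driver docs), pinning the pause image in the client agent config looks something like this; the config path below is just an example:

    cat >> /etc/nomad.d/client.hcl <<'EOF'
    plugin "docker" {
      config {
        # avoid depending on the retired gcr.io host by pinning the pause image
        infra_image = "registry.k8s.io/pause-amd64:3.3"
      }
    }
    EOF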


That Nomad was hit with this after years of notice and deprecation extensions seems like a sign of serious maintenance issues.


I always thought it would have been better, and less confusing for newcomers, if GitHub had named the default remote “github”, instead of origin, in the examples.


Is this something the remote can control? I figured it was on the local cloner to decide.

Can’t test it now, but I wonder whether changing this affects the remote name for fresh clones: https://git-scm.com/docs/git-config#Documentation/git-config...
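
For reference, the local side can pick the name at clone time, and newer Git (2.30+) has a config default for fresh clones (assuming that's the option behind the truncated link; the repo paths below are placeholders):

    # name the remote "github" for this one clone
    git clone -o github git@github.com:someuser/somerepo.git

    # or set a default name for all future clones
    git config --global clone.defaultRemoteName github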


If I clone my fork, I always add the upstream remote straight away. Origin and upstream could each be GitHub, which would be ambiguous.
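
Something like this (the URL is a placeholder):

    # "origin" points at my fork, "upstream" at the project itself
    git remote add upstream git@github.com:project/repo.git
    git fetch upstream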


GitHub could not name it so, because it's not up to GitHub to choose.


There are places where it does choose, but arguably it makes sense for it to be consistent with what you get when using "git clone".


How is it less confusing when your fork is also on GitHub?


Requiring a fork to open pull requests as an outsider to a project is in itself an idiosyncrasy of GitHub that could be done without. Gitea and Forgejo, for example, support AGit: https://forgejo.org/docs/latest/user/agit-support/.

Nevertheless, to avoid ambiguity I usually name my personal forks on GitHub gh-<username>.
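
For the curious, an AGit-style pull request is opened straight from a push; a rough sketch from memory of the Forgejo docs (the target branch and topic name are placeholders):

    # push the current branch for review against main, no fork needed
    git push origin HEAD:refs/for/main -o topic=my-change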


No, it's a normal feature of Git. If I want you to pull my changes, I need to host those changes somewhere that you can access. If you and I are both just using ssh access to our separate Apache servers, for example, I am going to have to push my changes to a fork on my server before you can pull them.

And of course in Git every clone is a fork.

AGit seems to be a new alternative where apparently you can push a new branch to someone else's repository that you don't normally have access to, but that's never guaranteed to be possible, and is certainly very idiosyncratic.


> in Git every clone is a fork

That's backwards. In GitHub every fork is just a git clone. Before GitHub commandeered the term, "fork" was already in common use and it had a completely different meaning.


As I remember it, it was already in common use with exactly the same denotation; they just removed the derogatory connotation.


Arguably the OG workflow to submit your code is `git send-email`, and that also doesn't require an additional third clone on the same hosting platform as the target repository.

All those workflows are just as valid as the others, I was just pointing out that the way github does it is not the only way it can be done.


Yes, that's true. Or git format-patch.
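
Roughly, for the mailing-list flow (the list address here is just a placeholder):

    # turn the commits since origin/main into mail-ready patch files
    git format-patch origin/main

    # and mail them to the project's list
    git send-email --to=project-devel@example.org *.patch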


> Requiring a fork to open pull requests as an outsider to a project is in itself an idiosyncrasy of GitHub that could be done without. Gitea and Forgejo, for example, support AGit: https://forgejo.org/docs/latest/user/agit-support/.

Ah yes, I'm sure the remote being called "origin" is what confuses people when they have to push to a refspec with push options. That's so much more straightforward than a "create pull request" button.


As far as I'm concerned the problem isn't that one is easier than the other. It's that in the GitHub case it completely routes around the git client. With AGit plus Gitea or Forgejo you can either click your "create pull request" button, or make a pull request right from the git client. One is necessarily going to require more information than the other to reach the destination...

It's like arguing that instead of having salad or fries on the menu with your entree they should only serve fries.


Agreed, you'd need a second name anyway. And "origin" and "upstream" are probably nicer than "github" and "my-fork", because the convention seems like it should apply to all the other git hosts too: Codeberg, SourceHut, TFS, etc.


Huh. Everyone seems to use "origin" and "upstream". I've been using "origin" and "fork" the whole time.


I use "mine" for my fork.


That's really impressive and an interesting experiment.

I was about to say that Nomad did something similar, but that was 2 million Docker containers across 6100 nodes, https://www.hashicorp.com/en/c2m


If you're using Tailscale you can install it on your Apple TV (if you also happen to have one of those devices).

Now you can use your home connection as a proxy through WireGuard when traveling.
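
If I remember the CLI flags right, on a laptop it looks roughly like this (the node name is a placeholder, and the Apple TV app has to be set to advertise itself as an exit node):

    # route all traffic through the Apple TV at home
    tailscale set --exit-node=apple-tv

    # switch back to direct traffic when done
    tailscale set --exit-node=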


Tailscale is great, but by itself it's the wrong tool for the task of routing traffic over some host for only a single browser tab (but to all destinations for that browser tab), as it seems to be "all or nothing" when it comes to using a remote exit node.

It's probably possible to set up a local SOCKS proxy that knows to use some Tailscale non-exit-node for egress, and to manually allow that traffic within Tailscale and on the remote node, but not out of the box as far as I can tell.

Installing a SOCKS proxy on the remote node, reachable only over Tailscale, would be an alternative, but that doesn't work on an Apple TV.
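
For a node that can run sshd (so not the Apple TV), a quick-and-dirty version of that alternative is plain SSH dynamic forwarding over the tailnet, then pointing the browser profile or container at it (hostname and port here are placeholders):

    # local SOCKS5 proxy on :1080, egressing from the remote tailnet node
    ssh -N -D 1080 user@mynode.tailnet.ts.net

    # then set the browser/container proxy to SOCKS5 localhost:1080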

