
Almost. To such a degree that I would call it a very dark pattern.

There is, however, one very good argument for it: currencies with very high volatility. Think extreme inflation. If you accept their conversion you know what you will pay in your own currency, so you have mitigated a risk. If your own currency is the volatile one you might gamble and win. If the foreign currency is the volatile one you will usually win by paying in the foreign currency. If both are volatile it is a blind gamble.

The important part here is the settlement date. Your bank usually does not calculate the exchange rate at the exact time of purchase.
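
A made-up illustration of that settlement-date gap (all numbers invented for the example):

    Purchase: 100,000 ARS.
    DCC offer at the terminal:  1 EUR = 1,000 ARS  ->  locked in at 100.00 EUR (markup already baked in)
    Card network, settled two days later at 1 EUR = 1,050 ARS  ->  95.24 EUR
    If the peso keeps sliding you win by paying in pesos; if it recovers, the locked-in price wins.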

That is the excuse for the "service". But it is still not wanted and I consider it evil.

When traveling in places with rampant inflation you will notice that sellers always negotiate two prices: one in the local currency and one in what is considered an easy-to-use hard currency such as USD or euros. Forgeries and less cash in circulation have made it harder to use other, less known but otherwise hard currencies.

So sellers never care which currency you choose to settle in, as practically no seller has multiple accounts behind the same terminal. And those who really need it will always negotiate in different currencies.

You might have experienced something like this at times when visiting Argentina or Turkey.

So the "service" is only there for those who want to understand what they pay in their own currency or to mitigate the settlement-date risk. And who will pay for it!

Local terminal owners rarely care. But the ATM mafias (such as Euronet) very much do, because they are actively playing the mitigation game and are allowed to add fees.

I strongly feel this field should be very heavily regulated. But too much money is involved. And if you look at where Visa and Mastercard are headquartered, you will understand that it is not a regulation-happy corner of the planet.


Historically (like, 15+ years ago when I did the SEA backpacking circuit) there have been some cards with ridiculous fees for international transactions. Like, a flat $10 per transaction. Back then, when I saw prompts like this on card terminals, I assumed they were targeted at those cardholders (or at people who had heard stories about those fees, were unsure and worried about what they would be charged, and wanted to be reassured by a number in their home currency).

Just so everyone is aware, it is still considered a foreign transaction regardless of which option you pick. So if you are using a card that charges for that, you will be charged a foreign transaction fee. It is a foreign transaction fee, not a foreign currency fee.

Except American Express, which does have foreign currency fees (on some cards).

I think that’s exactly a big part of why this scam was developed. If you aren’t that informed, don’t know your credit card terms by heart, but you’ve heard about those “foreign fees” it’s very plausible that this service would save you money. Not likely of course, since the scam is obfuscated and hidden in a dollar amount presented without the computation.

I don’t agree with this.

If you're in a place that wants dollars or euros because its currency is "bad" (volatile, or not freely exchangeable for dollars), they prefer dollars. You can tell because you get a better-than-official exchange rate.

I have to say I've never been anywhere the currency was so volatile that the settlement date mattered. Wouldn't carrying local currency be part of your risk anyway? This could only come up in the almost-all-digital-payments modern world.


As Jeff states, there are really no Thunderbolt switches, which currently limits the size of the cluster.

But would it be possible to utilize RoCE with these boxes rather than RDMA over Thunderbolt? And what would the expected performance be? As I understand it, RDMA should be 7-10 times faster than going via TCP. If I understand correctly, RoCE is RDMA over Converged Ethernet, so it uses Ethernet frames and a lower layer rather than TCP.
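
If anyone wants to put numbers on that gap, a rough sketch of how I would measure it, assuming Linux boxes with RoCE-capable NICs (device and host names are made up):

    ibv_devices                      # list RDMA-capable devices
    # TCP baseline with iperf3
    iperf3 -s                        # on node A
    iperf3 -c nodeA                  # on node B
    # RDMA bandwidth and latency with the perftest suite
    ib_send_bw -d mlx5_0             # on node A (server side)
    ib_send_bw -d mlx5_0 nodeA       # on node B (client side)
    ib_send_lat -d mlx5_0 nodeA      # same, but latency instead of bandwidth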

10G Thunderbolt adapters are fairly common. But you can find 40G and 80G Thunderbolt Ethernet adapters from Atto. Probably not cheap, but it would be fun to test! But even if the bandwidth is there we might get killed by latency.

Imagine this hardware with a PCIe slot. The Infiniband hardware is there - then we "just" need the driver.


At that point you could just break out the Thunderbolt to PCIe and use a regular NIC. Actually, I'm pretty sure that's all there is to the Atto ThunderLink: a case around a Broadcom NIC.

Then you _just_ need the driver. Fascinating, Apple ships MLX5 drivers, that's crazy imo. I understand that's something they might need internally, but shipping that on iPadOS is wild. https://kittenlabs.de/blog/2024/05/17/25gbit/s-on-macos-ios/


That is what I am suggesting with the Atto adapter.

Infiniband is way faster and lower latency than a NIC. These days NIC==Ethernet.


What makes you think Infiniband is faster than Ethernet? Aren't they pretty much equal these days with RDMA and kernel bypass?

macOS ships with drivers for Mellanox ConnectX cards, but I have no idea if they will show up in `ibv_devices` or `ibv_devinfo`.

This actually makes me happy! I must be getting old!

It truly is a bad one, but I really appreciate Kevin Day finding/reporting this and all the volunteer work that went into fixing it.

All I had to do was "freebsd-update fetch install && reboot" on my systems and I could continue my day. Fleet management can be that easy for both pets and cattle. I do however feel for those who have deployed embedded systems. We can only hope the firmware vendors are on top of their game.
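
For the curious, "that easy" for a small fleet is roughly the loop below (host names are made up; the --not-running-from-cron flag is there because freebsd-update's fetch refuses to run without a terminal):

    for h in web1 web2 db1; do       # hypothetical hosts
        ssh root@"$h" 'freebsd-update --not-running-from-cron fetch install && shutdown -r now'
    done                             # the reboot is only strictly needed when the kernel is patched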

My HN addiction is now vindicated as I would probably not have noticed this RCE until after christmas.

This makes me very grateful and gives me a warm fuzzy feeling inside!


> We can only hope the firmware vendors are on top of their game.

You should go into comedy, this would kill at an open mic!


Even better, the reboot wasn't needed as the kernel didn't get bumped on this one. Just restart the rtsold service if you're using it and sanity check your resolv.conf and resolvconf.conf.

As for noticing it quickly, add `freebsd-update cron` to crontab and it will email you the fetch summary when updates are available.
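
For reference, roughly what that looks like (the crontab schedule here is arbitrary):

    service rtsold status && service rtsold restart     # only relevant if rtsold is running
    cat /etc/resolv.conf /etc/resolvconf.conf            # sanity check (one of these may not exist)
    # in /etc/crontab, to get mailed when updates are pending:
    0 3 * * *   root   freebsd-update cron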


> My HN addiction is now vindicated as I would probably not have noticed this RCE until after christmas.

Always makes sense to subscribe to the security-announce mailing list of major dependencies (distro/vendor, openssh, openssl etc.) and oss-security.


Where "major dependency" means everything that even indirectly touches the network. It doesn't really matter whether the thing that gives everyone access to your systems is major or not.

If it’s a shell script fix does it even need a reboot?

Try a larger n.

They are more than capable. I have just looked at what BMW, Mercedes and Audi have on offer and then compared it with what Zeekr and Xpeng have on offer (7X, G9). Quality-wise they feel the same or even better.

While I agree that as a "complete car" the full package might not quite be there yet, that is from a European perspective, as they are mostly focused on their home markets. But this is changing. It is simply iterating toward product/market fit.

Personally I find the major problem in Chinese cars is the software. That is the easy fix, and they are getting closer with each iteration.

So much so that today I would choose a Zeekr 7X but have chosen to postpone, as the software was too annoying (adaptive cruise control, lane assist, sign recognition, auto brake, audio cues).

The big loss we have with EVs is serviceability. But that is a universal problem with all automakers.


I don't care all that much about cars; my n is solely based on the cars I was provided for business trips. One Xpeng (no clue which) was among those vehicles.

The main problem I see (beyond my i4 just being significantly nicer to drive) is replacement parts. The Chinese EV companies lack replacement parts and a qualified repair network. Replacement-part availability is rather bad across the board, which has a number of cascade effects, primarily higher insurance premiums. On top of that, it looks like the EV market in China will consolidate in the not-too-distant future, most likely compromising long-term maintainability.


Or Yangwang, BYD's luxury brand.

You said it better at first: Standardization.

Posix is the standard.

Docker is a tool on top of that layer. Absolutely nothing wrong with it!

But you need to document towards the lower layers. What libraries are used and how they're interconnected.

Posix gives you that common ground.

I will never ask people not to supply Dockerfiles. But to me it feels the same as if a project just released an apt package and nothing else.

The manual steps need to be documented. Not for regular users but for those porting to other systems.

I do not like black boxes.


Why I moved away from Docker for self-hosted stuff was the lack of documentation and the very complicated Dockerfiles with various shell scripts and service configs. Sometimes it feels like reading autoconf-generated files. I much prefer to learn whatever packaging method the OS uses and build the thing myself.


I can sympathize. It makes sense.

But...

As a veteran admin I am tired of reading through Dockerfiles to guess how to do a native setup. You can never suss out the intent from those files - only make haphazard guesses.

It smells too much like "the code is the documentation".

I am fine that the manual install steps are hidden deep in the dungeons away from the casual users.

But please do not replace Posix compliance with Docker compliance.

Look at Immich for an unfortunate example. They have some nice high-level architecture documentation, but the "whys" of the Dockerfile are nowhere to be found. That makes it harder to contribute, as it caters to the Docker crowd only and leaves a lot of guesswork for the Posix crowd.


Veteran sysadmin of 30 years... UNIX sysadmin and developer...

I have used docker+compose for my dev projects for about the past 12 years. Very tough to beat the speed of development with multi-tier applications.

To me Dockerfiles seem like the perfect amount of DSL but still flexible, because you can literally run any command as a RUN line and produce anything you want for a layer. Dockerfiles seem to get it right. Maybe the 'anything' seems like a mis-feature, but if you use it well it's a game changer.
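
For anyone who hasn't written one, a complete (if made-up) example really is this small; the image tag and file names are invented:

    cat > Dockerfile <<'EOF'
    FROM debian:stable-slim
    RUN apt-get update && apt-get install -y --no-install-recommends curl
    COPY app.sh /usr/local/bin/app
    CMD ["app"]
    EOF
    docker build -t myapp:dev .        # assumes app.sh exists and is executable
    docker run --rm myapp:dev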

Dockerfiles are also an excellent way to distribute FOSS to people who unlike you or I cannot really manage a system, install software, etc without eventually making a mess or getting lost (i.e. jr developers?).

Are there supply chain risks? Sure -- like many package systems. I build my important images from scratch all the time just to mitigate this. There's also Podman with Containerfiles if you want something more FOSS-friendly but less polished.

All that said, I generally containerize production workloads, but not with Docker. If a dev project is ready for primetime I now port it to Kubernetes. It used to be BSD jails.


> Dockerfiles are also an excellent way to distribute FOSS to people who unlike you or I cannot really manage a system, install software, etc without eventually making a mess or getting lost (i.e. jr developers?).

Read what you just said:

> ... to people who unlike you or I cannot really manage a system ...

These are people who should not be running systems.

> I build my important images from scratch all the time...

I doubt it, but assuming you're telling the truth, then you're a rare cookie because my clients don't even do that, and they're either government bodies with millions in funding or enterprises with 60,000 employees across the entire globe.

Again, the art of the operating system, and managing it, has been lost. It's been replaced with something that adds even more problems, security or otherwise, for the sake of convenience.

I hope everything works out super well for you, friend.


> These are people who should not be running systems.

I lol'd at that :) I was just trying to be more inclusive! Results have been mixed.

I do build images from scratch, but I also recognize I am atypical (like many around HN). As I get older (and older) I realize that in another time and place I would have been someone completely different -- but I got the internet era. Not complaining. It's worked out OK.

I wish you the best as well friend!


I daily drive FreeBSD on my desktop with KDE. It is not as smooth as Linux and requires a little more tinkering. But I love it!

The killer features for me:

- The pf firewall. Rules you actually understand! (Tiny example after this list.)

- Jails! When you cannot have Zones this will do.

- Native ZFS. Stable, mature, safe and with all the features you can dream of.

- Linuxulator. Binary compatibility with Linux if need be. Can be put in jail as well.

- pkg/ports. I really like it but I might have been indoctrinated.

- Networking stack. Good. Stable. Makes sense to me.
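
On the pf point, a minimal, hypothetical ruleset and how to load it (the interface name is made up):

    cat > /etc/pf.conf <<'EOF'
    ext_if="em0"                      # adjust to your NIC
    set skip on lo0
    block in all                      # default deny inbound
    pass out all keep state           # allow outbound, keep state
    pass in on $ext_if proto tcp to port 22 keep state   # allow ssh
    EOF
    pfctl -nf /etc/pf.conf            # parse only (syntax check)
    pfctl -ef /etc/pf.conf            # enable pf and load the rules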

For a nice graphical UI Linux is smoother, but if you are willing to tinker FreeBSD can work. As Linux gets all the attention you will see stuff such as Chromium lag behind.

I can understand that this can scare people off. But FreeBSD feels like a comfortable old glove to me. I will suffer the minor holes. My beard has grayed and my hairline is non-existent.

If shopping for a laptop I would perhaps wait for FreeBSD 15 for much-needed improvements in WiFi. If you want fast WiFi today you need weird hacks routing through a Linux VM[1]. It works rather well but it is honestly a bit clunky.

[1] https://github.com/pgj/freebsd-wifibox
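
For reference, the setup is roughly the following, assuming the package and rc service are both simply called wifibox (check the project docs; the exact names may differ):

    kldload vmm                        # bhyve module for the Linux guest; load at boot via kld_list
    pkg install wifibox                # or build it from ports
    sysrc wifibox_enable="YES"
    service wifibox start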


I remember a hack, back in the day, on Linux where a Windows wifi driver was used via a thing called NDISwrapper. Be patient and hopefully you'll soon be looking back on your Linux VM bodge in the rear view mirror.


I hadn't realised Ndiswrapper was deprecated in Linux. I thought I had just been too lucky with my WiFi cards over the last 10-15 years!


Wifi isn't quite solved on any platform. It is also quite hard to decide what solved really looks like!

My wife and I have identical HP laptops. Hers runs Arch (as you do), with KDE, and mine runs Kubuntu 25.10 at the mo. Both use NetworkManager.

I look after both.

Randomly after wake from suspend, wifi may or may not still be working. When I say random, I mean after a kernel update or when the wind changes direction. I think the wife's lappy is OK now because I seem to have gotten a lot fewer "support" calls over the last few weeks.

To be fair, there are a lot of moving parts from a lot of bits of Linux involved in a modern distro these days.

When I say hard to decide what solved looks like: if Samba or SSSD crap out, is that wifi's fault or the kernel/driver? This is exactly what Windows has had to solve over the years and I do note things like credential managers and mounts that manage to survive disconnects being bolted on to Linux.

All that scrappy stuff needs to be passed on to the BSDs too: getting a laptop to cope with file systems that come and go, a dicky clock tick, networking that comes and goes, VPNs, and all the rest.

Getting all of that to work is quite a job.


So I suppose: NEVER use WiFi for anything critical (server, security, high availability, medical, etc). I don't know if the WiFi cards in later devices are of subpar quality to reduce costs. The WiFi debugging is insane.


Sounds exactly like my mom's Windows computer. Flaky wifi/power issues are not a Linux problem.



FreeBSD is worth using for native ZFS alone. BTRFS doesn't even come close.
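
A hedged sketch of the kind of workflow that makes it worth it, combining snapshots with boot environments before an upgrade (dataset names assume the default zroot layout):

    zfs snapshot -r zroot@pre-upgrade     # recursive snapshot of the whole pool
    bectl create 15.0-upgrade             # new boot environment cloned from the current one
    bectl activate 15.0-upgrade           # boot into it on the next reboot, upgrade there
    bectl list                            # if it breaks, activate the old one and reboot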


It’s all OpenZFS now, same as Linux lmao.


Yes and no. OpenZFS on Linux still isn't as seamless, and most distros still don't make it easy to do ZFS on root. It's definitely gotten better though.


I don't see any reason to use traditional ZFS on root when ZFSBootMenu is easier and better, also comparable to BSD's boot menus.


Is there a linux distro that makes the setup of zfsbootmenu easy? I found it to be quite a bit of work.


It's still out-of-tree though, isn't it?


> If you want fast WIFI today

Fast still means beyond 802.11g? (11n support is incomplete, last time I checked)

Because there is no corporate sponsor that needs good Wi-Fi drivers on FreeBSD, I doubt it will ever be better. I guess Sony, but it's all custom for them. I doubt there is anything to contribute back, even if Sony was open to that idea.


FreeBSD has 802.11ac.


But that is still for a limited number of chipsets, right? I would absolutely love to see way more support. I remember a while back the FreeBSD Foundation putting some serious (on their scale of funds) money toward WiFi.


I can't remember which chipsets support it. But yes, the FreeBSD Foundation has been putting a lot of money into laptop support, including wifi improvements.


Can it actually work at 802.11ac speeds? I recall it maxing out at 22 Mbps.


I don't know exactly how fast it runs, and it will absolutely depend on your hardware and network, but I've definitely gotten more than that.


I guess it's better now then. To be clear, on Linux the very same laptop in the very same location (dual boot) got proper speed. The issue was entirely in the FreeBSD drivers.


Yes, we've definitely had improvements recently. 14.3 was much faster than 14.2 and 15.0 is even faster.


I daily drive FreeBSD with IceWM, four screens (2@4K, 2@1080p) running with Xorg on a Sapphire 5600XT, and I can't fault it.


Exactly. When it works it is great.

I stick with a single 43" 4K@60 but it was a bit of a challenge to get on the happy path:

https://forums.freebsd.org/threads/intermittent-scanline-fli...

All systems can have issues. But the more widely used systems are at an advantage.


Are the screens directly connected or do you use a docking station? If docking station, does it require DisplayLink drivers to drive the monitors?


No docking station.

Although the 2x4k monitors are daisy chained. I was surprised when it worked first time.


If only there were a resurgence of BSD. Linux always feels like the JavaScript of the OS world.


I'm glad I'm not the only person with similar feelings. I'm perfectly comfortable in Linux, but there's a certain ... uncanniness to it that's hard to pin down. FreeBSD (and, I suspect, the other BSDs as well) just feels more coherent.


After daily driving OpenBSD and FreeBSD, I can point the finger at the Linux kernel subsystems that try to handle everything under the sun, but with no clear direction and with competing projects with different designs. Everything is three or more layers, each governed by a different team and interacting in opaque ways.

Meanwhile in the *BSDs, you have the devices or some other OS concept/subsystem, then a control layer with the associated management tools. Any other tool is either an alternate version or a UI paint job.


Because it is. Linux doesn't really have a concrete idea of a "base system" like the BSDs do. Linux is more of a hodgepodge of components developed by different, and often a lot more isolated, teams than we think, that all just gets integrated together. Which is truly an astonishing achievement of engineering, so I don't wanna seem like I am short-selling it. Think of the developers who work on gcc and the libc and the kernel: maybe some cross-pollination, but not a lot. In FreeBSD, the userland, the kernel and even the libc all happen under "one roof."


if Linux is a JavaScript, then what is Windows? haha


IBM VisualAge Perl++


either bash, or one of those ridiculous mainframe languages from the 1960s with impossible-to-remember names


I believe the Jargon File has a couple of lyrics about IBM and JCL, to the tune of the Mickey Mouse Club theme.


What is the best (most reliable) way to run multiple Linux instances on a FreeBSD host?


Honestly, the problem is always the f!@#ing hardware, isn't it

The reason all this is hard is likely a remnant of what Microsoft did in the 1990s, to the point where non-Windows OSes are given the shaft.

Nvidia, Broadcom, Wifi generally, whatever


Oh yes, it /is/ the f!@#ing hardware. The core FreeBSD developers have taken their sweet time to add support for WiFi on anything IoT running FreeBSD. In other words — FreeBSD's core developers usually will not listen to users asking for such things unless maximum pressure gets applied in every separate instance. Disclaimer: I'm not a FreeBSD user. Apart from the halfway decent distros which use FreeBSD as their core OS, the FreeBSD developers in charge of FreeBSD itself will not add a GUI installer for some old school reason that really, only they would know of. One issue coming directly from this constraint is that if you run BSD through a VM — either on Linux or Windows it is rather difficult if not impossible to get past 1024 x 768 resolution without going through some major hoops. FreeBSD does not do a thorough job supporting VirtualBox instances, generally speaking. BSD is meant more for the back-end "bare metal" servers.


I'm glad this fits with my intuition

I think they assume people know what they're doing but a little x session never hurt anyone?


Sounds like Basilica of San Clemente[1][2]. One of the many many many "hidden" gems of Rome. Highly recommend visiting it!

Or you can go on a virtual tour[3]

[1] https://www.basilicasanclemente.com/eng/

[2] https://maps.app.goo.gl/zpXpQuxQLUvE5TLA9

[3] https://www.basilicasanclemente.com/eng/a-virtual-tour/


Thanks! And this is the article I remember https://www.exurbe.com/the-shape-of-rome/


Perfectly pure happiness:

"The answer to this is very simple. It was a joke. It had to be a number, an ordinary, smallish number, and I chose that one. Binary representations, base thirteen, Tibetan monks are all complete nonsense. I sat at my desk, stared into the garden and thought '42 will do' I typed it out. End of story." from the man himself[1]

...but let us not ruin a good story with the truth. Remember why earth was built. The "real" answer might then be flowing in the ether.

[1] https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27...


Reasonable.

That seems to be the key word.

One camp argues: Expect nothing. Move on.

The other: Could they - with very little effort (reasonable) - have chosen a more palatable route?

There must be a middle ground between the nihilists and the pampered.

