Hacker News | hackyhacky's comments

> The Linux kernel has eBPF now so if they wanted to start spying on everything you do they can just do it.

Sure, except that anyone can just compile a Linux kernel that doesn't allow that.

Anti-cheat systems on Windows work because Windows is hard(er) to tamper with.


Well, yeah, but then eBPF would not work, and the anti-cheat could just detect that it's not working and lock you out.

This isn't complicated.

Even the CrowdStrike Falcon agent has switched to eBPF because it lowers the risk that a kernel driver will brick downstream machines, like what happened with Windows that one time. I recently configured a corporate single sign-on to simply not work if the BPF component was disabled.


Well, but then attackers just compile a kernel with a rootkit that hides both the cheat and itself from the APIs the BPF program relies on, so the anti-cheat has to deal with that too or it's trivially bypassed.

Anticheat and antivirus are two similar but different games. It's very complicated.


The BPF API isn't the only telemetry source for an anti-cheat module. There are a lot of other things you can look at. A BPF API showing blanks for known pid descendant trees would be a big red flag. You're right that it's very complicated, but the toolchain is there if someone wanted to do the hard work of making an attempt. It's really telemetry forensics, and there's little you can do if the cheat is external to the system.
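That cross-check can be sketched in a few lines: compare the process set the BPF side reports against an independent enumeration of /proc, and flag pids that one source hides. This is an illustrative sketch only; `hidden_pids` and the synthetic pid sets stand in for whatever a real eBPF program would actually export.

```python
import os

def procfs_pids():
    """Enumerate pids independently by walking /proc (Linux only)."""
    return {int(d) for d in os.listdir("/proc") if d.isdigit()}

def hidden_pids(bpf_reported_pids, proc_pids):
    """Pids visible in /proc but absent from the BPF-side report.

    A non-empty result for pids we know should be tracked (e.g. the
    game's descendant tree) is the "showing blanks" red flag.
    """
    return set(proc_pids) - set(bpf_reported_pids)

# Illustrative run with synthetic data: the BPF report omits pid 4242.
proc_view = {1, 100, 4242}
bpf_view = {1, 100}
print(hidden_pids(bpf_view, proc_view))  # {4242}
```

The same idea generalizes to any pair of telemetry sources: any disagreement between two views that should be identical is itself a signal, regardless of which one is lying.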

The interesting solution here is secure boot, only allow users to play from a set of trusted kernels.

I'd be less anti-anticheat if I could just select the handcuffs at boot time for the rare occasion where I need them.

Although even then I'd still have qualms about paying for the creation of something that might pave the path for hardware vendors to work with authoritarian governments to restrict users to approved kernel builds. The potential harms are just not in the same league as whatever problems it might solve for gamers.


Once a slave, always a slave. Running an explicitly anti-user proprietary kernel module that does god-knows-what is not something I'd ever be willing to do, games be damned. It might just inject exploits into all of your binaries and you'd be none the wiser. Since it wouldn't work in VMs, you'd have to use a dedicated physical machine for it. That seems too high a price to pay for just a few games.

What if the kernel module is only run in a separate VM from your main one?

Games that require kernel-level anticheat will probably try to detect VMs and refuse to run.

The idea is that the hypervisor would also be signed and provide security guarantees to games to block cheats from working.

Being able to snapshot and restore memory is a pretty common feature across all decent hypervisors. That in and of itself enables most client-side cheats. I doubt they'd bother to provide such a hypervisor for the vanishingly small intersection of people who:

- Want to play these adversarial games

- Don't care about compromising control of hypervisor

- Don't simply have a dedicated gaming box


>Being able to snapshot and restore memory is a pretty common feature across all decent hypervisors

A hypervisor that protects against this already exists for Linux with Android's pKVM. Android properly enforces isolation between all guests.

Desktop Linux distros are way behind in terms of security compared to Android. If desktop Linux users ever want L1 DRM to work to get access to high resolution movies and such they are going to need such a hypervisor. This is not a niche use case.


It "protects" against this given the user already does not control the hypervisor, at which point all bets are off with regard to your rights anyway. It's actually worse than Windows in this regard.

I would never use a computer I don't have full control over as my main desktop, especially not to satisfy an external party's desire for control. It seems a lot more convenient to just use a separate machine.

Even mainstream consumers are getting tired of DRM crap ruining their games and movies. I doubt a significant number of Linux users would actually want to compromise their ownership of their computers just to watch movies or play games.

I do agree that Linux userland security is lackluster though. Flatpak seems to be a neat advancement, at least in regard to stopping things from basically uploading your filesystem. There are already plenty of kernel interfaces that can do this, like user namespaces. I wish someone would come up with something like QubesOS, but making use of containers instead of VMs and Wayland proxies for better performance.


You already don't control the firmware on the CPU. Would you be okay with this if the hypervisor was moved into the firmware of the CPU and other components instead?

I honestly think you would be content as long as the computer offered the ability to host an arbitrary operating system just like has always been possible. Just because there may be an optional guest running that you can't fully control that doesn't take away from the ability to have an arbitrary guest you can fully customize.

>to satisfy an external party's desire for control.

The external party is reflecting the average consumer's demand for there not being cheaters in the game they are playing.

>It seems a lot more convenient to just use a separate machine.

It really isn't. It's much more convenient to launch a game on the computer you are already using than going to a separate one.


Ah, I see, you're talking about Intel ME/AMD PSP? That's unfortunate and I'm obviously not happy with it, but so far there seems to be no evidence of it being abused against normal users.

It's a little funny that the two interests of adtech are colliding a bit here: They want maximum control and data collection, but implementing control in a palatable way (like you describe) would limit their data collection abilities.

My answer to your question: No, I don't like it at all, even if I fully trust the hypervisor. It would reduce the barrier for implementing all kinds of anti-user technologies. If that were possible, it would quickly be required to interact with everything, and your arbitrary guest would soon be pretty useless, just like the "integrity" bullshit on Android. Yeah, you can boot your rooted AOSP, but good luck interacting with banks, government services (often required by law!!), etc. That's still a net minus compared to the status quo.

In general, I dislike any methods that try to apply an arbitrary set of criteria to entitle you to a "free" service to prevent "abuse", be it captchas, play integrity, or Altman's worldcoin. That "abuse" is just rational behavior from misaligned incentives, because non-market mechanisms like this are fundamentally flawed and there is always a large incentive to exploit it. They want to have their cake and eat it too, by eating your cake. I don't want to let them have their way.

> The external party is reflecting the average consumer's demand for there not being cheaters in the game they are playing.

Pretty sure we already have enough technology to fully automate many games with robotics. If there is a will, there is a way. As with everything else on the internet, everyone you don't know will be considered untrusted by default. Not the happiest outcome, but I prefer it to losing general purpose computing.


>you're talking about Intel ME/AMD PSP?

I'm talking about the entire chip. You are unable to implement a new instruction for the CPU for example. Only Intel or AMD can do so. You already don't have full control over the CPU. You only have as much control as the documentation for the computer gives you. The idea of full control is not a real thing and it is not necessary for a computer to be useful or accomplish what you want.

>and your arbitrary guest will soon be pretty useless

If software doesn't want to support insecure guests, the option is between being unable to use it, or being able to use it in a secure guest. Your entire computer will become useless without the secure guest.

>Yeah you can boot your rooted AOSP, but good luck interacting with banks, government services (often required by law!!), etc.

This could be handled by also running another guest, supported by those app developers, that meets the required security properties, alongside your arbitrary one.

>That "abuse" is just rational behavior from misaligned incentives

Often these can't be fixed, or fixing them would result in a poor user experience for everyone due to a few bad actors. If your answer is to just not build the app in the first place, that is not a satisfying answer. It's a net positive to be able to do things like watch movies for free on YouTube. It's beneficial for all parties. I don't think it is in anyone's best interest to not do such a thing just because there isn't a proper market incentive in place to stop people from ripping the movie.

>If there is a will, there is a way.

The goal of anticheat is to minimize customer frustration caused due to cheaters. It can still be successful even if it technically does not stop every possible cheat.

>general purpose computing

General purpose computing will always be possible. It just will no longer be the wild west, where there was no security and every program could mess with every other program. Within a program's own context it is still able to do whatever it wants; you can implement a Turing machine (bar the infinite memory).


> Intel or AMD

They certainly aren't perfect, but for the time being they don't seem to be hell-bent on spying on me or shoving crap into my face every waking hour.

> insecure guests

"Insecure" for the program against the user. It's such a dystopian idea that I don't know what to respond with.

> required security requirements

I don't believe any external party has the right to require me to use my own property in a certain way. This ends freedom as we know it. The most immediate consequence is that we'd be subject to more ads with no way to opt out, but that would just be the beginning.

> stop people from ripping the movie

This is physically impossible anyway. There's always the analog hole, recording screens, etc, and I'm sure AI denoising will close the gap in quality.

> it technically does not stop every possible cheat

The bar gets lower by the day with locally deployable AI. We'd lose all this freedom for nothing at the end of the day. If you don't want cheating, the game needs to be played in a supervised context, just like how students take exams or sports competitions have referees.

And these are my concerns with your ideal "hypervisor" provided by a benevolent party. In this world we live in, the hypervisor is provided by the same people who don't want you to have any control whatsoever, and would probably inject ads/backdoors/telemetry into your "free" guest anyway. After all, they've gotten away with worse.


>"Insecure" for the program against the user.

We already tried out trusting the users and it turns out that a few bad apples can spoil the bunch.

>It's such a dystopian idea that I don't know what to respond with.

Plenty of other devices are designed so that you can only use it in safe ways the designer intends. For example a microwave won't function while the door is open. This is not dystopia despite potentially going against what the user wants to be able to do.

>I don't believe any external party has the right to require me to use my own property in a certain way.

And companies are not obligated to support running on your custom modified property.

>The bar gets lower by the day with locally deployable AI.

The bar at least can be raised from searching "free hacks" and double clicking the cheat exe.

>who don't want you to have any control whatsoever

This isn't true. These systems offer plenty of control, but they are just designed in a way that security actually exists and can't be easily bypassed.

>and would probably inject ads/backdoors/telemetry into your "free" guest anyway.

This is very unlikely. It is unsupported speculation.


Yep, plenty of prior art on how to implement the necessary attestations. Valve could totally ship their boxes with support for anti-cheat kernel attestation.

Is it possible to do this in a relatively hardware-agnostic, but reliable manner? Probably not.


What do you mean? Ship a computer with preinstalled Linux that you can't tamper with? Sounds like Android. For ordinary computers, secure boot is fully configurable, so it won't work: I can disable it, I can install my own keys, etc. And for any userspace way to check it, I'll fool you if I own the kernel.

No, just have the anti-cheat trust kernels signed by the major Linux vendors and use secure boot with remote attestation. Remote attestation can't be fooled from kernel space, that's the entire point of the technology.

That way you could use an official kernel from Fedora, Ubuntu, Debian, Arch etc. A custom one wouldn't be supported but that's significantly better than blocking things universally.
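The verification side of that scheme looks roughly like this: the server sends a fresh nonce, the client returns a quote over the measured boot chain, and the server checks the quote's integrity and freshness, then checks the measured kernel hash against an allowlist of vendor builds. A real TPM quote is an asymmetric signature over PCR digests; the HMAC with a per-device key below is just a stand-in for that, and all names and hashes here are illustrative.

```python
import hashlib
import hmac
import os

# Hypothetical allowlist of kernel image hashes published by distro vendors.
VENDOR_KERNEL_HASHES = {
    hashlib.sha256(b"fedora-6.8.9-300").hexdigest(),
    hashlib.sha256(b"ubuntu-6.8.0-31-generic").hexdigest(),
}

DEVICE_KEY = b"per-device endorsement secret"  # stand-in for the TPM's key

def make_quote(kernel_image: bytes, nonce: bytes, key: bytes) -> dict:
    """Client side: 'sign' the measured kernel hash plus the server's nonce."""
    kernel_hash = hashlib.sha256(kernel_image).hexdigest()
    mac = hmac.new(key, kernel_hash.encode() + nonce, hashlib.sha256).hexdigest()
    return {"kernel_hash": kernel_hash, "nonce": nonce, "mac": mac}

def verify_quote(quote: dict, nonce: bytes, key: bytes) -> bool:
    """Server side: check freshness, integrity, then the allowlist."""
    if quote["nonce"] != nonce:
        return False  # stale or replayed quote
    expected = hmac.new(key, quote["kernel_hash"].encode() + nonce,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, quote["mac"]):
        return False  # tampered report
    return quote["kernel_hash"] in VENDOR_KERNEL_HASHES

nonce = os.urandom(16)
good = make_quote(b"fedora-6.8.9-300", nonce, DEVICE_KEY)
custom = make_quote(b"my-patched-kernel", nonce, DEVICE_KEY)
print(verify_quote(good, nonce, DEVICE_KEY))    # True
print(verify_quote(custom, nonce, DEVICE_KEY))  # False
```

The point of the nonce is that the kernel can't just replay a quote recorded on someone else's trusted machine; the point of keeping the key inside the TPM is that the kernel can't forge the MAC no matter what it controls.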


You can't implement remote attestation without a full chain of exploits (from the perspective of the user). Remote attestation works on Android because there is dedicated hardware that directly establishes communication with Google's servers and runs independently (as a backchannel). There is no such hardware in PCs. Software-based attestation was easily fooled on earlier Android/Linux.

The call asks the TPM to display the signed boot chain; you can't fake that because it wouldn't be cryptographically valid. The TPM is that independent hardware.

How would that be implemented? I'd be curious to know.

I'm not aware that a TPM is capable of hiding a key without the OS being able to access/unseal it at some point. It can display a signed boot chain but what would it be signed with?

If it's not signed with a key out of the reach of the system, you can always implement a fake driver pretty easily to spoof it.


I guess something like that: https://tpm2-software.github.io/tpm2-tss/getting-started/201...

Basically, the TPM includes a key that's also signed with a manufacturer key. You can't just extract it, and the signature ensures that this key is "trusted". When asked, the TPM will return the boot chain (including the bootloader or UKI hash), signed by its own key, which you can present to a remote party. The whole protocol is more complicated and includes a challenge.
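With tpm2-tools, the challenge/quote flow looks roughly like this. This is a sketch only: option details vary between tool versions, and a real verifier would also have to validate the EK certificate chain up to the manufacturer CA and compare the PCR values against a known-good boot chain.

```shell
# Verifier sends a fresh nonce (the challenge) to prevent replay.
nonce=$(openssl rand -hex 16)

# Prover: derive an attestation key under the endorsement hierarchy.
tpm2_createek -c ek.ctx -G rsa -u ek.pub
tpm2_createak -C ek.ctx -c ak.ctx -u ak.pub -n ak.name

# Prover: quote the boot-measurement PCRs together with the nonce.
tpm2_quote -c ak.ctx -l sha256:0,4,7 -q "$nonce" \
           -m quote.msg -s quote.sig -o quote.pcrs

# Verifier: check the signature and nonce against the AK public key,
# then compare the reported PCR values to the expected boot chain.
tpm2_checkquote -u ak.pub -m quote.msg -s quote.sig \
                -f quote.pcrs -q "$nonce"
```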


The TPM isn't designed for this use case. You can use it for disk encryption or for identity attestation, but step 1 for identity attestation is asking the TPM to generate a key and then trusting that fingerprint from then on, after doing a test sign with a binary blob. The running kernel is just a binary that can be hashed and whitelisted by a userspace application. You don't need a TPM for that.

This is called the Endorsement Key, and you're correct, it never leaves the TPM. The TPM is a "black box" to the OS.

Ah, got it. With enough motivation this is still pretty easily defeated though. The key is in some kind of NVRAM, which can be read with specialized equipment, and once it's out, you can use it to spoof signatures on a different machine and cheat as usual. The TPM implementations in a lot of consumer hardware are also rather questionable.

These attestation methods would probably work well enough if you pin a specific key like for a hardened anti-evil-maid setup in a colo, but I doubt it'd work if it trusts a large number of vendor keys by default.


Once it's out you could but EKs are unique and tied to hardware. Using an EK to sign a boot state on hardware that doesn't match is a flag to an anti-cheat tool, and would only ever work for one person.

It also means that if you do get banned for any reason (obvious cheating) they then ban the EK and you need to go source more hardware.

It's not perfect but it raises the bar significantly for cheaters to the point that they don't bother.


> Using an EK to sign a boot state on hardware that doesn't match is a flag to an anti-cheat tool

The idea is you implement a fake driver to sign whatever message you want and totally faking your hardware list too. As long as they are relatively similar models I doubt there's a good way to tell.

Yeah, I think there are much easier ways to cheat at this point, like robotics/special hardware, so it probably does raise the bar.


Any sane scheme would whitelist TPM implementations. Anyway fTPMs are a thing now which would ultimately tie the underlying security of the anticheat to the CPU manufacturer.


I wonder if you could use checkpoint and restore in userspace (https://criu.org/Main_Page) so that after the game boots and passes the checks on a valid system, you can move it to an "invalid" system (where you have all the mods and all the tools to tamper with it).
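The checkpoint-and-migrate trick would look something like this with the criu CLI (a sketch under optimistic assumptions: processes holding GPU state or anti-tamper watchdogs are exactly the cases CRIU struggles with, and a kernel-level anti-cheat would likely notice the restore environment anyway; `tamper-box` is a hypothetical hostname):

```shell
# On the "clean" machine: freeze the running game and dump its state.
criu dump --tree "$GAME_PID" --images-dir /tmp/ckpt \
     --shell-job --tcp-established --leave-stopped

# Copy the checkpoint images to the modified machine...
rsync -a /tmp/ckpt/ tamper-box:/tmp/ckpt/

# ...and resume the process there, outside the attested environment.
criu restore --images-dir /tmp/ckpt --shell-job --tcp-established
```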

I don't really care about games, but i do care about messing up people and companies that do such heinous crimes against humanity (kernel-level anti-cheat).


The war is lost. The most popular game that refuses to use kernel-level anti-cheat is Valve's Counter-Strike 2, so the community implemented it themselves (FaceIT) and requires it for the competitive scene.

You can switch out the kernel in the running Linux desktop.

Uh, you'd have to compile a kernel that doesn't allow it while claiming it does ... and behaves as if it does - otherwise you'd just fail the check, no?

I feel like this is way overstated. It's not that easy to do, and it could conceptually be done on Windows too via hardware simulation/virtual machines. Both would require significant investment in development to pull off.


Right, the very thing that works against AC on Linux also works for it. There are multiple layers (don't forget Wine/Proton) to inject a cheat, but those same layers could also be exploited to detect cheats (especially adding fingerprints over time and issuing massive ban-waves).

And then you have BasicallyHomeless on YouTube who is stimulating nerves and using actuators to "cheat." With the likes of the RP2040, even something like an aim-correcting mouse becomes completely cheap and trivial. There is a sweet-spot for AC and I feel like kernel-level might be a bit too far.


All it takes is going to /usr/src/linux and running make menuconfig, turning off a few build flags, hitting save, and then running make to recompile. But that's like saying "well, if I remove fat32 support I can't use fat32." Yeah, it will lock you out, showing you have it disabled. No big deal.
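For what it's worth, the scriptable equivalent of those menuconfig steps is the in-tree scripts/config helper. Disabling BPF wholesale looks something like this (a sketch; exact option names and dependencies vary by kernel version):

```shell
cd /usr/src/linux

# Turn off the BPF syscall and JIT. CONFIG_BPF itself is often
# force-selected by other options, so the syscall is the real switch.
./scripts/config --disable CONFIG_BPF_SYSCALL
./scripts/config --disable CONFIG_BPF_JIT

# Resolve any dependent options against the new settings, then rebuild.
make olddefconfig
make -j"$(nproc)"
```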

Neither can CEOs. What's your point?

They can. AI can’t automate anything.

> CEO's at least in the USA have multiple legal obligations under federal law.

Lots of people have legal obligations.

In this case, I assume you're referring to a fiduciary duty (i.e. to act in the best interests of the company), which is typically held not by the CEO but by the directors.

Ultimately the responsibility to assign daily operation of the company rests with the board, both legally and practically, as does the decision to use a human or AI CEO.


> The entire job is almost entirely human to human tasks: sales, networking, leading, etc.

So, writing emails?

"Hey, ChatGPT. Write a business strategy for our widget company. Then, draft emails to each department with instructions for implementing that strategy."

There, I just saved you $20 million.


People seem to have a poor model of what management and many knowledge workers do. Much of it isn't completing tasks, but identifying and creating them.

> Much of it isn't completing tasks, but identifying and creating them.

They failed miserably in the Automotive industry in Europe. The only thing that they identified was: "Shit, the profits are falling, do something"


"ChatGPT, please identify the tasks that a CEO of this company must do."

I get your point, but if you think that list of critical functions (or the unlisted "good ol' boys"-style tasks) boils down to some emails, then I think you don't have an appreciation for the work, finesse, or charisma required.

> I think you don't have an appreciation for the work or finesse or charisma required.

I think that you don't appreciate that charismatic emails are one of the few things that modern AI can do better than humans.

I wouldn't trust ChatGPT to do my math homework, but I would trust it to write a great op-ed piece.


For some reason the AI prompt "make me 20 million" hasn't been working for me. What am I doing wrong?

Have you got that plan reviewed by your analysts and handed over to implement by your employees? You may be missing those steps...

Automation depends on first getting paid to do something.

We could solve that by replacing all CEOs to remove the issue of finesse and charisma. LLMs can then discuss the actual proposals. (not entirely kidding)

It would actually be nicely self-reinforcing and resistant to a change back, because now it's in the board's interest to use an LLM, which cannot be smooth-talked into bad deals. Charisma becomes a negative signal and excludes more and more people.


Why are there "good ol boys" tasks in the first place? Instead, automate the C-suite with AI, get rid of these backroom dealings and exclusive private networks, and participate in a purer free market based on data. Isn't this what all the tech libertarians who are pushing AI are aiming for anyways? Complete automation of their workforces, free markets, etc etc? Makes more sense to cut the fat from the top first, as it's orders of magnitude larger than the fat on the bottom.

A more fair, less corrupt system/market sounds great! I also think once we solve that tiny problem that the "should ai do ceo jobs" problem is way easier!

What should we do while we wait for the good ol boys networks to dismantle themselves?

On a more serious note, the meritocracy, freedom, data, etc that big tech libertarians talk about seems to mostly be marketing. When we look at actions instead it's just more bog standard price fixing, insider deals, regulatory capture, bribes and other corruption with a little "create a fake government agency and gut the agencies investigating my companies" thrown in to keep things exciting.


> There, I just saved you $20 million.

If it were this easy, you could have done it by now. Have you?


> If it were this easy, you could have done it by now. Have you?

In order to save $20 million dollars with this technique, the first step is to hire a CEO who gets paid $20 million dollars. The second step is to replace the CEO with a bot.

I confess that I have not yet completed the first step.


Have you replaced the executive function in any one of your enterprises with ChatGPT?

I have completely replaced management of every company that I own with ChatGPT.

0 x 0 = 0 I guess?

How have they scaled?

This is literally a caricature of what the average HN engineer thinks a businessperson or CEO does all day; you couldn't make up better satire if you tried.

It's mind-boggling. I get riffing on the hyped superiority of CEOs. I've heard inane things said by them. But, being a human being with some experience observing other humans and power structures, I can assuredly say that the tight-knit group of wealthy power-brokers who operate on gut and bullshitting each other (and everyone) will not cede their power to AI, but use it as a tool.

Or maybe the person you're describing is right, and CEOs are just like a psy-rock band with a Macbook trying out some tunes hoping they make it big on Spotify.


Do you think CEOs have an accurate idea of what engineers do?

Neither side can truly know, that is the nature of a diffuse organization.

That won't stop them from replacing us.

Even if the AI gets infinitely good, the task of guiding it to create software for the use of other humans is called...software engineering. Therefore, SWEs will never go away, because humans do not know what they want, and they never will until they do.

I am sympathetic to your point, but reducing a complex social exchange like that down to 'writing emails' is wildly underestimating the problem. In any negotiation, it's essential to have an internal model of the other party. If you can't predict reactions you don't know which actions to take. I am not at all convinced any modern AI would be up to that task. Once one exists that is I think we stop being in charge of our little corner of the galaxy.

Artists, musicians, scientists, lawyers and programmers have all argued that the irreducible complexity of their jobs makes automation by AI impossible and all have been proven wrong to some degree. I see no reason why CEOs should be the exception.

Although I think it's more likely that we're going to enter an era of fully autonomous corporations, and the position of "CEO" will simply no longer exist except as a machine-to-machine protocol.


The one big reason why CEOs exist is trust. Trust from the shareholders that someone at the company is trying to achieve gains for them. Trust from vendors/customers that someone at the company is trying to make a good product. Trust from the employees that someone is trying to bring in the money to the company (even if it doesn't come to them eventually).

And that trust can only be a person who is innately human, because the AI will make decisions which are holistically good and not specifically directed towards the above goals. And if some of the above goals are in conflict, then the CEO will make decisions which benefit the more powerful group because of an innately uncontrollable reward function, which is not true of AI by design.


> The one big reason why CEOs exist is trust.

This sounds a lot like the specious argument that only humans can create "art", despite copious evidence to the contrary.

You know what builds trust? A history of positive results. If AIs perform well in a certain task, then people will trust them to complete it.

> Trust from vendors/customers that someone at the company is trying to make a good product.

I can assure you that I, as a consumer, have absolutely no trust in any CEO that they are trying to make a good product. Their job is to make money, and making a good product is merely a potential side-effect.


I feel like the people who can't comprehend the difficulties of an AI CEO are people who have never been in business sales or high level strategy and negotiating.

You can't think of a single difference in the nature of the job of artist/musician vs. lawyer vs. business executive?


> I feel like the people who can't comprehend the difficulties of an AI <thing doer> are people who have never <tried to do that thing really well>.

That applies to every call to replace jobs with current-gen AI.

But I can't think of a difference between CEOs and other professions that works out in favor of keeping the CEOs over the rest.


You sound like a CEO desperately trying not to get fired.

Everyone is indispensable until they aren't.


This whole thread is delightful. Well done.

Alas, this doesn't answer the question I posed.

CEOs are a different class of worker, with a different set of customers, a smaller pool of workers. They operate with a different set of rules than music creation or coding, and they sit at the top of the economy. They will use AI as a tool. Someone will sit at the top of a company. What would you call them?


>You can't think of a single difference in the nature of the job of artist/musician vs. lawyer vs. business executive?

I can think of plenty, but none that matter.

As the AI stans say, there is nothing special about being human. What is a "CEO?" Just a closed system of inputs and outputs, stimulus and response, encased in wetware. A physical system that like all physical systems can be automated and will be automated in time.


My assertion is that it's a small club of incredibly powerful people operating in a system of very human rules - not well defined structures like programming, or to a lesser extent, law.

The market they serve is themselves and powerful shareholders. They don't serve finicky consumers that have dozens of low-friction alternatives, like they do in AI slop Youtube videos, or logo generation for their new business.

A human at some point is at the top of the pyramid. Will CEOs be finding the best way to use AI to serve their agenda? They'd be foolish not to. But if you "replace the CEO", then the person below that is effectively the CEO.


They've been proven wrong? I'm not sure I've seen an LLM that does anything beyond the most basic rote boilerplate for any of these. I don't think any of these professions have been replaced at all?

> most survey respondents don't even _understand_ what AI is doing, so I am a bit skeptical to trust their opinions on whether it will cause harm

Why do they need to know how AI works, in order to know that it is already having a negative effect on their lives and is likely to do so in the future?

I don't understand how PFAS [1] work, but I know I don't want them in my drinking water.

[1] https://www.niehs.nih.gov/health/topics/agents/pfc


> Why do they need to know how AI works, in order to know that it is already having a negative effect on their lives and is likely to do so in the future?

Because otherwise you might not actually be attributing the harm you're seeing to the right thing. Lots of people in the US think current problems are left/right or socialist/authoritarian, while it's obviously a class issue. But if you're unable to take a step back and see things clearly, you'll misattribute the reasons why you're suffering.

I walked around on this earth for decades thinking Teflon is a really harmful material, until this year, when for some reason I learned that Teflon is actually a very inert polymer that doesn't react with anything in our bodies. I've avoided Teflon pans and such just because of my misunderstanding of whether this thing is dangerous to my body. Sure, this is a relatively trivial example, but I'm sure your imagination can see how this concept has broader implications.


> Close to 2/3 Americans also believe in magic so I'm not sure what these studies are supposed to tell us.

I think you're missing the point, as are many other comments on this post saying effectively, "These people don't even understand how AI works, so they can't make good predictions!"

It's true that most people can't make accurate predictions about AI, but this study is interesting because it represents people's current opinion, not future fact.

Right now, people are already distrustful of AI. This means that if you want people to adopt it, you need to persuade them otherwise. So far, most people's interactions with AI are limited to cheesy fake internet videos, deceptive memes, and the risk of shrinking labor demand.

In their short tenure in the public sphere, large language models have contributed nothing positive, except for (a) senior coders who can offload part of their job to Claude, and (b) managers, who can cut their workforce.

Why would people hold AI in high esteem?


> risk of shrinking labor demand

Yet this is a primary goal of AI. The problem is that, given how the dominant economic system is structured, reducing that demand increasingly leads to a societal crash.


hackyhacky says >"...most people can't make accurate predictions about AI..."<

"...no one can make accurate predictions about AI..."

FTFY


> You guys are so fucked.

"You guys"? Everyone is fucked. This is going to be everywhere. Coming to your neighborhood, eventually.


Not everyone lives in a third-world authoritarian backwater; it's time to stop that ridiculous US-centrism.

Sure, not everyone. Just the US, the UK, France, Germany, Poland, Hungary, Italy, Slovakia, Netherlands, Chile, Argentina, or Honduras. I'm sure it will pass.

It's not US-centrism. It's just an acknowledgement of the recent trend.


I dont live in a police state.

You either don't have police reports, or some amount of your country's police reports are written by AI.

I'd be more worried that you aren't reading articles about it than if you were.


Considering that AI can barely write in my native language I am not worried.

There are countries on this planet that are not actively digging their own graves.


C'mon, tell us, Mr. Rammstein's throwaway: which much-superior country is it?!

He won't tell you. If he did, he'd have to admit he lives in a police state or under martial law.

Misery loves company

a/s/l?

I guess that means you don't live in the US, or in the UK, or in Australia.

Correct

Don't forget about France, Germany, Poland, Hungary, Italy, Slovakia, Netherlands, Chile, Argentina, or Honduras. Right-wing authoritarianism is not limited to the anglosphere.

Haha yeah I didn't want to get into an exhaustive list

The NATO thing is justification that even Russia has not applied consistently. Putin is on record saying that Ukraine is part of the Russian sphere of influence, which means, according to him, they get to install their crony of choice. If NATO was their real concern, they could withdraw now in exchange for promises not to join NATO, but they also refuse to give up territory they've occupied or to allow any security guarantees from the west, all but setting up the next stage of their invasion.

> The US has to stop. The US is not the world's policeman, and the US had no legitimate right to declare itself such.

The US has the largest military on the planet, and the (relative) peace of the last 80 years is largely based on a credible threat of our willingness to use it. That power can be used for good; at the moment, we are simply not choosing to do so.


Was dropping two atomic bombs on civilian populations good? Was the US's role in the Korean war good? Was the US's intervention in the Chinese civil war good? Was the US's massacre of Puerto Rican freedom fighters, nationalists, and independence-seeking rebels during the Jayuya uprising good? Was the US's invasion of Vietnam good? Were the US's covert military operations in Laos using the paramilitary arm of the CIA good? Was the US's overthrow of the legitimately elected leader of Iran to install a US puppet good? Were the US's actions to destabilize a laundry list of Latin American countries to seize control of raw materials and commodity production and place it under American corporations good? Was the US's invasion of the Dominican Republic to quell mass democratic uprisings against a military coup that seized control from a democratically elected leader good? Were the US secret bombing campaigns against Laos and Cambodia good? Was the US invasion of Grenada good? Were the US's attacks against Iranian-owned offshore oil drilling platforms good? Was the US occupation of Panama good? Was the US invasion of Iraq good? Was the US bombing of Serbia good? Was the US invasion and occupation of Afghanistan good? How about the drone strikes against civilian weddings - good? How about illegal human-rights-violating extrajudicial rendition, detention, and torture programs - good?

Is this all "peace"? Is intentional mass murder of civilians "good" when we do it? Is Trump the first president to abuse US military capabilities in the last 80 years, or are you being selective and partisan in your recollection of one of the world's most prolific purveyors of incomprehensible violence against civilians, interference in the democratic processes of other nations, and violators of human rights in the last century?

We're getting far off track from the important point here though, which is that the US should not invade Venezuela, just as Russia should not have invaded Ukraine (the latter being a point of comparison for the former, not the subject of the conversation).


The answer to your question "are these last 80 years really peaceful?" is yes, in context. Look at the horror of the world wars, or the preceding ~1000 years of barbarity and wide-scale religious wars. The US does not always use its power wisely, but the alternative is to cede that power to someone else: nature abhors a power vacuum.

Modern anti-vaccine nuts have spent so long living without measles that they've forgotten the good that vaccines do and take their good health for granted. Anti-US-power nuts have lived in a world largely without large-scale conflicts, held in place by our NATO allies and the credible threat of force, and you've forgotten what a world without that stabilizing effect looks like. Spoiler: it looks like the 30 year war but with nukes.


I'm not "Anti-US-power", I'm anti-genocide, anti-terrorism, anti-war-crime, anti-torture, anti-invading-sovereign-nations, and pro-democracy. It's not my fault that the US has systemically made deliberate attacks against civilians, war crimes, rampant human rights abuses, invasions of sovereign nations, and overthrowing of democratically elected leaders the basis for US foreign policy and military doctrine for the last century or so.

When you ask me to look at the horror of the world wars, does that include the horror of the only country to ever use atomic weapons in conflict deliberately dropping them on cities they knew were full of civilians? If that's what the American version of "peace" looks like, I'm not interested in the American version of "peace". The Soviet Union never deliberately nuked New York. China never deliberately nuked Taipei. North Korea never deliberately nuked Seoul. Iran never deliberately nuked Jerusalem.

You propose a hypothetical future where you guess that a world without US "stability" involves nuclear weapons, while ignoring the fact that the world with US "stability" already involved them. History speaks louder than hypothesis.


There has never been a time in history devoid of crime, torture, genocide, and authoritarianism. But the last 80 years have seen those things at a low ebb in favor of democracy and peace.

Please tell me what the last 80 years would have looked like with an isolationist US, weak or no NATO, and an unimpeded ascent of dictatorial regimes. Answer: even more of all those things you purport to hate.

The US is the worst superpower, except for all the other ones. The choice is not between good and evil, it's between evil and less evil. (Just like presidential elections.) Don't be naive and empower the greater evil just because you're displeased with the lesser.

I'm not saying you should stop pressuring the US to act morally, but asking it to leave a power vacuum is dangerous.

Edit: typo


Is saying "The US should not invade Venezuela" asking the US to leave a power vacuum? Because that's been the only assertion I've made in this entire conversation about what the US should do, as opposed to what it has already done.

Imagine thinking that "isolationist" means not repeatedly deposing other countries' governments.

"Isolationist" means the US abandoning NATO and our allies, which is what the current "America First" movement supports, and is exactly what Russia and China want.

Certainly not. Police aren't supposed to help the criminals

> How do you expect things to ever change if no one ever updates?

Maybe things should stop changing.

We don't really need ten new CSS attributes every year. Things work. The elegant solution is to announce the project is done. That would bring some much-needed stability. Then we can focus on keeping things working.


The issue with this is that the browser is the cross-platform operating system, the VM that runs webapps, but we treat the platform like an evolving document format. If we want to declare it complete, we need to make it extensible, so we can have a stable core without freezing capabilities.

I foresee all of this CSS/HTML stuff eventually being declared a sort of legacy format, with a standard way added to ship pluggable rendering engines/language runtimes. WASM is one step in that direction. There are custom rendering/layout engines now, but they basically have to render to canvas and lose a lot of performance and platform integration. Proper official support for such engines, with hooks into accessibility features and the like, could close that gap.

Of course, then you have every website shipping a whole OS userland for every pageload, kinda like containers on servers, but that overhead could probably be mitigated with some caching of tagged dependencies. Then you have unscrupulous types who might use load timings to detect cache state for user profiling... I'm sure there's a better solution for that than just disabling cross-site caching...

I digress.


> I foresee all of this CSS/HTML stuff as eventually being declared a sort of legacy format and adding a standard way to ship pluggable rendering engines/language runtimes.

I doubt this is going to happen as long as backwards compatibility continues to be W3C's north star. That's why all current browsers can still render the first website created by TBL in 1990.

Sure, official support for certain extensions should happen but HTML/CSS will always be at the core.


11 years ago we had Python 2.7.8 and 3.4.0, so no type hints, no async/await, no match statement, no formatted string literals; large numbers couldn't be written like 13_370_000_000, etc.

Developers deserve nice things.


> Developers deserve nice things.

I agree they do. But Python is a bad counterexample. You can upgrade your Python on your server and no one has to know about it. But if you want to use new CSS features, then every browser has to implement that feature and every user has to upgrade their browser.

The intent of my comment was to express a desire to stabilize the web API in particular, not to freeze all software development in its tracks.


But people ship Python software, just like they ship CSS software, and Python is bundled in many operating systems. When somebody ships e.g. a CLI tool to manipulate subtitle files, and it uses a language feature from Python 3.9, that somebody is excluding you from running it on your 11-year-old system.

People get new browser versions for free; there are more important things to think about than users who for some reason don't want to upgrade. I would rather have my layout done quickly with nice, elegant code (and no hacks) and spend my extra time developing an excellent UX for my users who rely on assistive technology.

Note that your wish for stabilization was delivered by the CSSWG with the @supports rule. Now developers can use new features without breaking things for users on older browsers. So if a developer wants to use `display: grid-lanes`, they can put it in an @supports clause. However, if you are running Firefox 45 (released in March 2016; used by 0.09% of global users), @supports will not work and my site will not work in your browser. I, like most developers, usually don't bother putting things in an @supports clause once they pass "last 2 versions, not dead, > 0.2%".
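A minimal sketch of that pattern, reusing the hypothetical `grid-lanes` value from above (not a real CSS display value) as the feature being guarded:

```css
/* Fallback layout that every browser understands */
.gallery {
  display: flex;
  flex-wrap: wrap;
}

/* Applied only where the (hypothetical) new value is supported;
   browsers that don't recognize it skip this block and keep the fallback */
@supports (display: grid-lanes) {
  .gallery {
    display: grid-lanes;
  }
}
```

Browsers so old that they predate @supports itself ignore the whole rule, which is exactly the graceful degradation described above.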


> Maybe things should stop changing.

There are two kinds of technologies: those that change to meet user needs, and those that have started dying and being replaced by technologies that change to meet user needs.

