Hacker News | sciencejerk's comments

Your brother's livelihood is not safe from AI, nor is any other livelihood. A small slice of lucky, smart, well-placed, protected individuals will benefit from AI, and I presume many unlucky people with substantial disabilities or living in poverty will benefit as well. Technology seems to keep improving outcomes at the very top and the very bottom while sacrificing the biggest group in the middle. Many HN software engineers immensely benefited from Big Tech over the past 15 years -- they were part of that lucky, privileged group earning 300k+ USD salaries plus equity for a long time. AI has completely disrupted this space and drastically decreased the value of their work, and it largely did so by stealing open source code for training data. These software engineers are right to feel upset and threatened and to oppose these AI tools, since the tools are their replacement. I believe that is why you see so much AI hate on HN.

Blue collar work won't be safe for long. Just longer.

Agreed, agent scaling and orchestration indicate that demand for compute is going to blow up, if it hasn't already. The rationale for building all those datacenters they can't build fast enough is finally making sense.

Thank God for the EU regulations. The USA has been too lax about cracking down on anti-competitive market practices.

No, you wouldn't allow updates with Notepad++.

This is far too cynical of a take. LittleSnitch might not save you from well-established malware on your machine, but it will certainly hamper attempts to get payloads and exploits on your machine in the first place

LittleSnitch is great for MacOS; it is easily configured to alert you every time your machine makes ip/domain connections, which can then be accepted, denied, or rules made
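The accept/deny/make-a-rule flow described above can be sketched as a tiny rule engine. This is a hypothetical, stdlib-only illustration of the decision logic; a real app firewall like Little Snitch hooks the OS network stack, which this does not attempt:

```python
from fnmatch import fnmatch

# Ordered rule list of (pattern, action); first match wins.
# Patterns and hosts here are made up for illustration.
RULES = [
    ("*.apple.com:443", "allow"),
    ("telemetry.example.com:*", "deny"),
]

def decide(host: str, port: int) -> str:
    """Return 'allow', 'deny', or 'ask' for an outbound connection."""
    target = f"{host}:{port}"
    for pattern, action in RULES:
        if fnmatch(target, pattern):
            return action
    return "ask"  # no rule yet -> prompt the user, who can then make one
```

Anything that falls through to `"ask"` is where the alert dialog would pop up, and the user's choice would be appended to `RULES`.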

> LittleSnitch is great for MacOS; it is easily configured to alert you every time your machine makes ip/domain connections, which can then be accepted, denied, or rules made

For an open-source alternative, consider checking out LuLu [0]. It's not as feature-rich and its UI isn't as polished as the former's, but it gets the main job done.

[0] https://github.com/objective-see/LuLu


It's not open source, but I can also recommend Vallum[0] as a cheaper alternative to LittleSnitch.

[0] https://www.vallumfirewall.com/


I use LuLu, and it works great. It's kept my older versions of Photoshop and Acrobat from complaining and showing me ads for newer versions for the last couple of years!

Tossing in a suggestion for Vallum[0] here. It's not FOSS but very polished and a fraction of the cost of Little Snitch.

[0]: https://vallumfirewall.com/


Are you for real, using Apple products? Yuh...

Binisoft WFC for Windows is a free outbound firewall. It was acquired by Malwarebytes a while back, but they have not interfered with development so far.

https://www.binisoft.org/wfc.php

It has some areas where improvement is needed, but the fundamentals work and the user interface design is decent.

I am surprised it's not more popular for Windows users. All of the alternatives I've tried have critical issues which made me dismiss them as unserious.


Yeah, I've been using Fort on Windows. It's easy to use, not closed source, and not full of bloat like the commonly suggested Windows firewalls from various security companies.

Why is this happening?

They're "optimizing" costs wherever possible - reducing compute allocations, quantizing models, doing whatever they can to reduce the cost per token, but vehemently insisting that no such things are occurring, that it's all in the users' heads, and using the weaseliest of corporate weasel speak to explain what's happening. They insist it's not happening, then they say something like "oh, it happened but it was an accident", then they say "yes, it's happening, but it's actually good!" and "we serve the same model day by day, and we've always been at war with Eastasia."

They should be transparent and tell customers that they're trying not to lose money, but that'd entail telling people why they're paying for a service they're not getting. I suspect it's not legal to do a bait and switch like that, but this is pretty novel legal territory.
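One of the cost levers mentioned above, quantization, trades numerical precision for cheaper inference. A toy sketch of symmetric linear quantization (illustrative only; not any provider's actual scheme) shows the rounding error it introduces, and how the error grows as the bit width shrinks:

```python
def quantize(weights, bits=8):
    """Symmetric linear quantization: map floats onto signed ints of `bits` width."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Map quantized ints back to floats; the rounding error is permanent."""
    return [x * scale for x in q]

weights = [0.42, -1.37, 0.0051, 0.99]   # made-up example weights
q8, s8 = quantize(weights, bits=8)
restored = dequantize(q8, s8)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

With round-to-nearest, the per-weight error is bounded by half the scale step, and dropping from 8 to 4 bits makes that step (and the error) roughly sixteen times larger, which is the kind of quiet quality loss users might perceive.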


I have absolutely no inside knowledge, but I think it's a fair assumption that running the models is costly: when they release a new model they absorb that cost and give each user more raw power, and once they've captured the new users and the wow factor fades, they start cutting costs by reducing the capacity they provide. Rinse and repeat.

That is absolutely scummy.

There are frequently claims that Anthropic is somehow diluting or dumbing down models in some subtle way. Unfortunately it’s tough to validate these claims without a body of regularly checked evals. This test set should hopefully help settle whether Anthropic is actually making changes under the hood or whether the changes are all in people’s heads.
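A minimal version of such a regularly checked eval could look like the sketch below. Everything here is a placeholder assumption: the prompts, the tolerance, and the stand-in model function (a real harness would call the provider's API and score far more than exact matches):

```python
def score(model_fn, evals):
    """evals: list of (prompt, expected). Returns the exact-match accuracy."""
    hits = sum(1 for prompt, expected in evals if model_fn(prompt) == expected)
    return hits / len(evals)

def regressed(today, baseline, tolerance=0.02):
    """Flag a drop larger than assumed run-to-run noise."""
    return baseline - today > tolerance

# Stand-in "model" for illustration only; a real harness would call the API.
EVALS = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]
acc = score(lambda p: {"2+2": "4", "capital of France": "Paris"}.get(p, "?"), EVALS)
```

Running the same fixed set daily and comparing against a recorded baseline is what would separate real under-the-hood changes from perceived ones.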


>>> We never reduce model quality due to demand, time of day, or server load. The problems our users reported were due to infrastructure bugs alone.

Just ignore the continual degradation of service day over day, long after the "infrastructure bugs" have reportedly been solved.

Oh, and I've got a bridge in Brooklyn to sell ya, it's a great deal!


> We never reduce model quality due to demand, time of day, or server load

Forgive me, but as a native English speaker, this sentence says exactly one thing to me: we _do_ reduce model quality, just not for the listed reasons.

If they don't do it, they could put a full stop after the fifth word and save some ~~tokens~~ time.


Yes, Dario is responsible for some of the weaseliest of corporate weasel wording I've ever seen, and he's got some incredible competition in that arena. Those things aren't the reason, they're just strongly coincidental with the actual reason, which is to slow the burn rate and extend the runway.

Moreover, the assurance is about model quality, not about results quality.

It’s entirely possible it’s not happening, and this phenomenon of “model degradation” is just user hype meeting reality.

They wouldn't do it "intentionally". It would be a mistake "accidentally" made by a developer or an AI that, under the right conditions, allows Zoom employees, etc., arbitrary file reads on the host...


Now imagine it spoken by Cortana from the Halo series for the full effect

