I hate the classic Apple users' "mom" argument. Why are all your moms morons? And why do you want to fuck up the entire mobile landscape to baby-proof it for them? I'm not gonna ruin my experience with technology because you don't expect your mom to be able to wipe her ass without Apple's help.
The harder it is to do, the more the target's guard will be down.
In this case, sending your malicious image through a fake email might get flagged, or not even opened, by someone who's been trained in infosec enough to be suspicious of these things. But a tracking pixel in an email that is verifiably from a trusted entity will be opened no problem. It's the type of thing that will look pretty slick if you read about it being used.
It works for hard problems when the person already solves it and just needs the grunt work done
It also works for problems that have been solved a thousand times before, which impresses people and makes them think it is actually solving those problems
Which matches what they are. They're first and foremost pattern recognition engines extraordinaire. If they can identify some pattern that's out of whack in your code compared to something in the training data, or a bug that is similar to others that have been fixed in their training set, they can usually thwack those patterns over to your latent space and clean up the residuals. On pattern matching alone, they are significantly superhuman.
"Reasoning", however, is a feature that has been bolted on with a hacksaw and duct tape. Their ability to pattern match makes reasoning seem more powerful than it actually is. If your bug is within some reasonable distance of a pattern it has seen in training, reasoning can get it over the final hump. But if your problem is too far removed from what it has seen in its latent space, it's not likely to figure it out by reasoning alone.
it's meant in the literal sense but with metaphorical hacksaws and duct tape.
Early on, some advanced LLM users noticed they could get better results by forcing the insertion of a word like "Wait," or "Hang on," or "Actually," and then running the model for a few more paragraphs. This increases the chance of the model noticing a mistake it made.
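The trick above can be sketched as a simple loop (a hypothetical sketch, not any particular product's implementation; `generate` stands in for whatever LLM completion call you have):

```python
# Sketch of forced continuation: after the model stops, append a cue word
# and let it keep generating, raising the odds it re-checks its own work.
# `generate` is a hypothetical stand-in for a real LLM completion call.

def generate(text: str) -> str:
    # Placeholder: a real version would send `text` to an LLM API
    # and return the continuation it produces.
    return " ...model keeps reasoning here..."

def forced_continuation(prompt: str, cues=("Wait,", "Actually,"), rounds=2) -> str:
    text = prompt + generate(prompt)
    for i in range(rounds):
        cue = cues[i % len(cues)]
        # Force the cue into the transcript, then resume generation so the
        # model re-reads its earlier answer and may notice a mistake.
        text += "\n" + cue + generate(text)
    return text
```

The key point is that the cue is injected by the caller, not produced by the model; the model is simply made to continue from a transcript that already says "Wait,".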
I have run my own mail server for years and I rarely see spam. I'm running a classic Bayesian filter as outlined in the legendary PG post "A Plan For Spam" and it works very well. I don't really get all the fuss about this issue. When I do see a piece of unclassified spam I simply classify it and continue. For me this is a far better tradeoff than having all my most private mail on some bigcorp server where any nerd can rifle through it.
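A classifier in the spirit of that PG post fits in a few lines (a minimal sketch of Graham-style naive Bayes, not the commenter's actual setup; tokenization and smoothing are heavily simplified):

```python
from collections import Counter
import math

class BayesFilter:
    """Minimal Graham-style naive Bayes spam filter (sketch)."""

    def __init__(self):
        self.spam_counts = Counter()  # word -> occurrences in spam
        self.ham_counts = Counter()   # word -> occurrences in ham
        self.n_spam = 0               # spam messages trained on
        self.n_ham = 0                # ham messages trained on

    def train(self, words, is_spam):
        if is_spam:
            self.spam_counts.update(words)
            self.n_spam += 1
        else:
            self.ham_counts.update(words)
            self.n_ham += 1

    def spam_probability(self, words):
        # Combine per-word log odds; add-one smoothing keeps unseen
        # words from zeroing out the estimate.
        log_odds = 0.0
        for w in set(words):
            p_spam = (self.spam_counts[w] + 1) / (self.n_spam + 2)
            p_ham = (self.ham_counts[w] + 1) / (self.n_ham + 2)
            log_odds += math.log(p_spam) - math.log(p_ham)
        return 1.0 / (1.0 + math.exp(-log_odds))
```

The "classify and continue" workflow is just another call to `train` with the misclassified message, which is why the filter keeps improving with use.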
> For me this is a far better tradeoff than having all my most private mail on some bigcorp server where any nerd can rifle through it.
You've functionally given yourself very little extra privacy because the vast majority of emails you send or receive will still cross through BigCorp servers (whether Google, Microsoft, Intuit, or other).
You can do the work to run your own mail server, but so few other people do that one end of the conversation is still almost always feeding a corporation's data lake.
I agree with you, but I still run my own mail server. If people like me stopped doing that, we would cede the entire email landscape to BigCorp. That would be a sad fate for one of the truly decentralized protocols. It's like if we all just went back to AOL.
It's more expensive and difficult to hack or get a warrant to access multiple bigcorp servers with a variety of privacy stances and jurisdictions, than it is to get access to a single one. Security is about making attacks expensive.
No single BigCorp employee can go through all my mail.
If you're not convinced, no problem, please continue to enjoy your BigCorp email service.
And yet, if you're communicating with someone else who does the same (or uses a niche hosted provider), that entire conversation is outside their "data lake".
As someone who's run my own email for 25 years or so (I'm really getting old...), my biggest problem is not that I receive spam (SpamAssassin mostly takes care of it) but that my sent emails get marked as spam by the big providers. Yahoo is the worst offender and seems to do so at some base rate despite my best efforts (SPF, DKIM, ARC, and jumping through their registration hoops).
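For reference, the deliverability records mentioned are published as DNS TXT entries along these lines (hypothetical domain and selector names; the DKIM public key is elided):

```
example.com.                      TXT  "v=spf1 mx a -all"
selector1._domainkey.example.com. TXT  "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.com.               TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

(ARC is different: its headers are added to individual messages by relaying servers rather than published in DNS.)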
I've been running my own mail server for longer than I'd like to admit, but not for my critical/key email addresses. Looking at the spam filtering I get in Gmail, and knowing my endless fights with SpamAssassin and DNSBLs, I know I could never achieve that.
The only upside of having an actual mail server is the ability to say "this is incorrect, no one ever tried to send an email to this address/from this IP", or to serve custom 55x rejection messages.
I think the main gripe people have is that value doesn't flow the other way when frontier labs use training data. I think this poisoning is intended to be somewhat of a DRM feature: if you play nice and pay people for their data, you get real data; if you steal, you get poisoned.
That could be one path, but the site doesn't read like that at all. It seems more binary to me, basically saying "AI is a threat, and here is how we push back."
That is a good point. It would be interesting to determine the value of any single piece of content used as training data. I suspect it would be something like 1 × 10^-6 cents. I think it'd be much more useful to take some share of the profits and feed it back into some form of social fund used to provide services for humans.