Deliverability to Microsoft famously took a dive a bit over a year ago, when arbitrary failures inside their infrastructure caused DMARC/DKIM problems that they clearly had trouble diagnosing.
Even with a six-figure email spend and weeks of troubleshooting, the best response we could get from our mail provider was that they were having trouble getting traction with Microsoft on the issue.
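While waiting on the provider, it's at least easy to rule out the basics yourself, since DMARC and DKIM records are plain DNS TXT lookups. A minimal sketch using dnspython; example.com and the selector1 selector are placeholders for your own domain and DKIM selector, not anything from our actual setup:

    # Minimal sketch: fetch a domain's published DMARC and DKIM
    # records. Requires dnspython (pip install dnspython).
    import dns.resolver

    def txt_record(name: str) -> list[str]:
        # Join the TXT strings published at a DNS name.
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]

    domain = "example.com"      # placeholder
    selector = "selector1"      # placeholder DKIM selector
    print("DMARC:", txt_record(f"_dmarc.{domain}"))
    print("DKIM: ", txt_record(f"{selector}._domainkey.{domain}"))

That won't tell you what Microsoft's filters are doing internally, of course, but it rules out self-inflicted record problems before you burn weeks on support tickets.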
Worth mentioning: there are several email umbrellas under Microsoft... including the newer Office 365, the slightly older outlook.com hosting, the old corporate hosting, and Hotmail and its sub-properties... each with its own rules and services that determine spam inconsistently across them.
One of my main emails is still on a "free" outlook.com plan hosted with a personal domain that I never shifted to paid 365. I've also got an MTA server (Mailu) of my own that I've been testing with... my own email under outlook.com is literally the only one of the MS systems I can't seem to deliver to; the rest work fine. Same for google.com, for that matter... kinda wild.
This issue didn't seem to discriminate: I was seeing deliverability failures to Office 365 clients as well as consumer-facing brands like Hotmail and MSN.
With a broad statement like this, I would usually just dismiss it as inflammatory and surely overstated.
However, I've also worked at a financial institution whose core systems came from Harland Financial Systems. Their "encryption" for data in transit from teller workstations to the core system was just a two-byte XOR, and they sent the key at the beginning of the connection!
It was so unbelievable to be able to crack this in under half an hour after noticing patterns in a PCAP. I wouldn't have believed it if I hadn't seen it with my own eyes.
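For the curious, here's roughly how trivial that scheme is to undo. A minimal Python sketch; the framing below is hypothetical (the key is assumed to be literally the first two bytes of the stream), not Harland's actual wire format:

    # Minimal sketch: undoing a repeating two-byte XOR "cipher".
    # The framing is hypothetical (key assumed to be the first two
    # bytes of the stream); it is not Harland's actual wire format.

    def xor_decode(data: bytes, key: bytes) -> bytes:
        # XOR every byte against the repeating two-byte key.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def decode_session(stream: bytes) -> bytes:
        # The key is sent in the clear at the start of the
        # connection, so just read it off and decode the rest.
        key, ciphertext = stream[:2], stream[2:]
        return xor_decode(ciphertext, key)

    def brute_force_key(ciphertext: bytes) -> bytes:
        # Even without the key there are only 65,536 candidates; a
        # crude "mostly printable ASCII" check finds it near-instantly.
        for k in range(65536):
            key = k.to_bytes(2, "big")
            plain = xor_decode(ciphertext, key)
            if all(32 <= b < 127 or b in (9, 10, 13) for b in plain):
                return key
        raise ValueError("no printable candidate found")

The tell in the PCAP is exactly what you'd expect: XOR-ing with a repeating key preserves the plaintext's structure at a two-byte period, so repeated fields and padding jump right out of a hex dump.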
That fraud was good enough for our regulators and theirs, so I have no doubt the industry is riddled with incompetence through and through.
Indeed. I'm confused by this line from the article:
> a study with 197 participants, the team could infer the identity of persons with almost 100% accuracy – independently of the perspective or their gait.
The paper seems to make it clear that the technique still depends on gait analysis, but claims it's more robust against gait variations.
Given the number of gait analysis publications over several decades using varying techniques, can you recommend a good review article disproving all of them?
Given the number of publications about curing <pick your uncured disease> over several decades using varying techniques, can you recommend a good review article disproving all of them?
Answer: no need. If it had been cured, it would be cured, and it is not.
My point being that the many publications titled "towards X" may mean we're making some progress towards X, but they don't at all mean that X is possible.
I don't think anyone has ever tried to publish something disproving all of the gait analysis claims; that would be an odd sort of thing. But I haven't seen anything reach the point we could call productized and reliable. It's relatively easy to publish theoretical papers; much harder to show something working reliably in the wild.
The approach described in the article is quite different and more interesting, as it's passive and doesn't require any electronics on the individual being identified.
> The problem is that the vulnerability exploited by salt typhoon is a systemic flaw implemented at the demand of Cantwell and other of our legislative morons.
Assuming you're talking about CALEA, I find it hard to blame Cantwell personally given that she first joined the House in 1993, and CALEA was passed in 1994. She wasn't in much of a position to "demand" anything against the headwinds of a bipartisan bill passed in both chambers by a voice vote.
The point remains that she's pretending the problem is AT&T, when really it is the US government's demand for a backdoor.
This should be trumpeted as an example of why we cannot mandate encryption backdoors in chat, unless we want everybody to have access to every encrypted message we send.
I don't agree. I have two theories about these overused patterns, because they're way overrepresented.
One, they're rhetorical devices popular in oral speech, and are being picked up from transcripts and commercial sources, e.g., television ads or political talking-head shows.
Two, they're popular with reviewers while models are going through post-training, either because they help paper over logical gaps or because they provide a stylistic gloss that feels professional in small doses.
There is no way these patterns are in normal written English in the training corpus in the same proportion as they're being output.
> Two, they're popular with reviewers while models are going through post-training, either because they help paper over logical gaps or because they provide a stylistic gloss that feels professional in small doses.
I think this is it. It sounds incredibly confident, which makes reviewers much more likely to accept it as "correct" or "intelligent", because they're primed to believe it, and less likely to question it.
I raised this point on a previous Cloudflare blog post: they've turned quite vapid these days. If you pay attention, their posts are stuffed to the brim with generated text that is sloppy and shows little thought about who the writing is for in the first place.