causalscience's comments

If they're actually self-driving, they should be able to drive around the obstacles just as well as, or better than, a human.

That's because those people are all brainwashed.

They were told: buy guns because freedom, and they repeat "we buy guns because freedom".

Then they were told: never mind freedom, let's shoot this unarmed person, and they repeat "never mind freedom, shoot the person".

The "we need our guns to protect our freedom against the government" idea could have some merit, but the reason right-wingers say it is different. They say it because that meme has infected them and uses them to replicate. A meme in the original sense:

> A meme is a term referring to a unit of cultural information transferable from one mind to another.


The distinction between promotion and spam is solely that the promotion side can be entertaining enough that the users will willingly chase it.

The fact that you leave unsaved work overnight is the actual crazy part.

Why though? On Mac, I have tons of unsaved work: many TextEdit windows which keep their state for months, even through reboots. And it has worked like that for at least 10 years. It's such a simple little quality-of-life thing. And Microsoft just doesn't care.

This is what a computer should be doing: helping the user to get their work done, without the user having to worry about insignificant details like saving files. E.g., does Google Docs ever ask where to save a file before closing the browser or shutting down the computer? No, you just get an untitled document that is automatically saved. If I want to rename it or save it in a different location, I am free to do so. But as long as I don't, it doesn't get in the way and just persists stuff automatically.


I don't disagree, but you have to know which applications reliably keep their state across restarts. You can't blindly rely on it on any desktop system. The Microsoft Office applications have actually auto-saved documents for a few years now, even though the recovery UX can be a bit awkward.

What Microsoft doesn't care about is that you may have applications running that don't do that, when Windows reboots for updates.


On macOS the feature is baked into the OS's APIs; the app developer just opts into using them. If they don't, quitting with unsaved work will prompt the user modally and block the restart, to the point where the OS will time out the reboot process and give up. The only way to purposefully lose unsaved work in almost every app I've ever used on macOS is to yank the power cable or hold the power button down.

Window locations and app state are written to plist files, again using OS libraries and APIs for app resume. I can reboot my Mac and sometimes not even realize it happened; it all comes back the way it was.
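
The pattern itself is simple enough to sketch outside Apple's frameworks. Here's a hypothetical toy version in TypeScript, just to show the idea; this is not the NSDocument/state-restoration API: persist the state on every change, restore it at launch, and a reboot becomes invisible to the user.

    // toy-autosave.ts -- hypothetical sketch, not the macOS API.
    // Persist app state on every change; restore it on launch.
    import { readFileSync, writeFileSync, existsSync } from "node:fs";

    interface AppState {
      openDocuments: { title: string; text: string }[];
      windowFrame: { x: number; y: number; w: number; h: number };
    }

    const STATE_FILE = "app-state.json"; // stands in for the plist files macOS writes

    function saveState(state: AppState): void {
      // Called on every edit / window move, so there is never "unsaved work".
      writeFileSync(STATE_FILE, JSON.stringify(state));
    }

    function restoreState(): AppState | null {
      // Called at launch; if the file exists, the app comes back as it was.
      if (!existsSync(STATE_FILE)) return null;
      return JSON.parse(readFileSync(STATE_FILE, "utf8")) as AppState;
    }

    // Usage sketch: restore on startup, save after each change.
    const state = restoreState() ?? {
      openDocuments: [{ title: "Untitled", text: "" }],
      windowFrame: { x: 100, y: 100, w: 800, h: 600 },
    };
    state.openDocuments[0].text += "some edit\n";
    saveState(state);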


The blocking happens on Windows as well, except that the timeout logic is the reverse: it force-quits the applications instead, because presumably the potential security update is more important.

Yep. On Mac (and Linux, actually) I know of some applications that do that. I also know that on Windows most applications don't. I would also never leave unsaved work open on Windows.

I was replying to: "The fact that you leave unsaved work overnight is the actual crazy part". As long as you know which apps auto-save and know you can somewhat rely on them, it's not so crazy.


> Why though?

> Microsoft just doesn't care.

So you know why. Also, Macs have other apps besides TextEdit; do all of them preserve unsaved docs across restarts?

> what a computer should be doing

Ok, but the discussion is about reality


> Macs have other apps besides TextEdit; do all of them preserve unsaved docs across restarts?

Every Mac app I’ve ever used does.

I don’t really care though; I reboot at most once every six months.


Most stuff on my Mac seems OK. The clunkiest is the Microsoft software (Word and Excel), but even that sort of works.



Of course, everyone has their own workflow. I won't tell anyone to adjust their workflow. But the exact point I was trying to make is that it's not random apps. It's specific apps that one knows about and how they behave. And once you know those apps (like TextEdit, Google Docs, etc.) you can pretty much rely on them to survive reboots and power outages.

Personally, it's rare that I leave something unsaved. That said, it has never been an issue on macOS in 20 years.

There are plenty of tasks that can take hours and don't save their progress: running a simulation, training an AI model, rendering video. Or, these days, leaving agentic AI models running in a loop implementing tasks.

Even if the state is recoverable, it doesn't mean that it's simple to recover.

I would be infuriated if my OS decided to shut itself down without permission.


Huh?

I use a Mac and a Linux box. It'd never cross my mind that I can't leave some unsaved changes overnight. I leave unsaved changes for weeks across the many things I'm working on.


Worse, I often wondered how some people collaborated. Now I know that many people would rather have a chunk of the population rounded up and killed than lose their job.


"Whoever can make you believe absurdities, can make you commit atrocities." and "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

etc, etc. So it goes


For years I've been hearing people say "Signal requires a phone number, therefore I don't use it", and for years I've been hearing them mocked.

Turns out they were right.


They weren't, though? Signal requires a phone number to sign up and it is linked to your account, but your phone number is not used for under-the-hood account or device identification, it is not shared by default, your number can be entirely removed from contact discovery if you wish, and even if they got a warrant or were tapping Signal infrastructure directly, it'd be extremely non-trivial to extract user phone numbers.

https://signal.org/blog/phone-number-privacy-usernames/

https://signal.org/blog/sealed-sender/

https://signal.org/blog/private-contact-discovery/

https://signal.org/blog/building-faster-oram/

https://signal.org/blog/signal-private-group-system/


In past instances where Signal has complied with warrants, such as the 2021 and 2024 Santa Clara County cases, the records they provided included phone numbers to identify the specific accounts for which data was available. This was necessary to specify which requested accounts (identified by phone numbers in the warrants) had associated metadata, such as account creation timestamps and last connection dates.


Yep, however that only exposes a value for "last time the user registered/verified their account via phone number activation" and "last day the app connected to the Signal servers".

There isn't really anything you can do with that information. The first value is already accessible via other methods (since the phone companies carry those records and will comply with warrants). And for pretty much anyone with Signal installed, that second value is going to essentially always be the day the search occurred.

And like another user mentioned, the most recent of those warrants is from the day before they moved to username-based identification, so it is unclear whether the same amount of data is still extractable.


I would think being able to subpoena records for all active Signal users would be a cause for concern.

Ironically enough Reddit seems to have a pretty good take on this: https://www.reddit.com/r/law/comments/1qogc2g/comment/o21aeh...

I was genuinely surprised when I went to Reddit and saw that as the most voted comment on the story.


I think that's a fair assessment on their part; however, it's worth noting that your phone number does not serve as your account ID. It can be used to look up an account, but there are caveats to that.

The lookups go through a secure enclave, the system is architected to limit the number of lookups that can be done, and the system has some fairly extensive anti-exfiltration cryptographic fuckery running inside the secure enclave to further limit the extent to which accounts can be efficiently looked up.
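
If it helps to picture the shape of it, here is a hypothetical toy sketch in TypeScript of hashed, rate-limited contact lookup. To be clear, this is not Signal's actual design (they use secure enclaves and ORAM precisely because plain hashes of phone numbers can be brute-forced over the small phone-number space); it only illustrates the "server sees hashes, and lookups are budgeted" idea.

    // toy-contact-discovery.ts -- hypothetical sketch, NOT Signal's enclave/ORAM design.
    // Illustrates two ideas in plain form: the server only ever receives hashes,
    // and each client gets a lookup budget so the directory can't be scraped.
    import { createHash } from "node:crypto";

    const hashNumber = (e164: string): string =>
      createHash("sha256").update(e164).digest("hex");

    class ContactDirectory {
      private byHash = new Map<string, string>(); // hash(phone) -> accountId
      private budget = new Map<string, number>(); // clientId -> lookups remaining

      register(phone: string, accountId: string): void {
        this.byHash.set(hashNumber(phone), accountId);
      }

      lookup(clientId: string, phoneHashes: string[]): (string | null)[] {
        const remaining = this.budget.get(clientId) ?? 50; // arbitrary per-client cap
        if (phoneHashes.length > remaining) {
          throw new Error("lookup budget exceeded"); // blocks bulk enumeration
        }
        this.budget.set(clientId, remaining - phoneHashes.length);
        return phoneHashes.map((h) => this.byHash.get(h) ?? null);
      }
    }

    // Usage: the client hashes its address book locally and sends only the hashes.
    const dir = new ContactDirectory();
    dir.register("+15551234567", "account-abc");
    console.log(dir.lookup("client-1", [hashNumber("+15551234567")])); // ["account-abc"]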

And of course you can also remove your phone number from contact discovery (but not from the account entirely), but I'm not sure how that interacts with lookups for subpoenas. If they use the same system that contact discovery uses, it may be an undocumented way to exclude your account from subpoena responses.

The rest of what they say, however, is pretty spot on. The priority for Signal is privacy, not anonymity. They try to optimise anonymity when they can, but they do give up a little anonymity in exchange for anti-spam and user-friendliness.

So of course the ending notes of "use a VPN, configure the settings to maximise anonymity, and maybe even get a secondary phone number to use with it" are all perfectly reasonable suggestions.


This was before Signal switched to a username system.


Others mention you must still register with a phone, although you can remove it from your account after you go through the username stuff? Usually HN is pretty good about identifying that the default path is the path, and that opt-out behavior like this means very little for mass usage.


It's not that you can remove it from your account entirely. Your account is still linked to that number. It's that you can remove the number from contact discovery.

And re: defaults, the default behavior on Signal is that your phone number is hidden from other users but it can be used for contact discovery. Notably, though, you can turn contact discovery off (albeit few people do).


Which of those links actually say that your phone number is private from Signal? If anything, this passage makes it sound like it's the reverse, because they specifically call out usernames not being stored in plaintext, but not phone numbers.

>We have also worked to ensure that keeping your phone number private from the people you speak with doesn’t necessitate giving more personal information to Signal. Your username is not stored in plaintext, meaning that Signal cannot easily see or produce the usernames of given accounts.


> it'd be extremely non-trivial

Extremely non-trivial. What I'm hearing is "security by obfuscation".


Absolutely nothing in this article is related to feds using conversation metadata to map participants, so, no they weren’t.


If you follow the X chatter on this, some folks got into the groups and tracked all the numbers, their contributions, and when they went "on shift" or "off".

I don't really think Signal tech has anything to do with this.


Yeah. It's notable they didn't crack the crypto. In the 90s when I was a young cypherpunk, I had this idea that when strong crypto was ubiquitous, certainly people would be smart enough to understand its role was only to force bad guys to attack the "higher levels" like attacking human expectations of privacy on a public channel. It was probably unrealistic to assume everyone would automatically understand subtle details of technology.

As a reminder... if you don't know all the people in your encrypted group chat, you could be talking to the man.


That’s really interesting extra context, thanks!


My Session and Briar chats don't give out the phone numbers of other users.


Yes, but they have their own weaknesses. For instance, Briar exposes your Bluetooth MAC, and there's a bunch of nasty Bluetooth vulns waiting to be exploited. You can't ever perfectly solve for both security and usability; you can only make tradeoffs.


Briar has multiple modes of operation. The Bluetooth mode is not the default and is there for circumstances where the Internet has been shut down entirely.

For users who configure Briar to connect exclusively over Tor using the normal startup (e.g., for internet-based syncing) and disable Bluetooth, there is no Bluetooth involvement at all, so your Bluetooth MAC address is not exposed.


Neither does Signal.


Both Session and Briar are decentralized technologies where you would never be able to approach a company to get any information. They operate over DHT-like networks and with Tor.

Signal does give out phone numbers when the law man comes, because they have to, and because they designed their system around this identifier.


This changed about two years ago, when they added usernames. ( https://signal.org/blog/phone-number-privacy-usernames/ )

Signal can still tell law enforcement (1) whether a phone number is registered with Signal, (2) when that phone number signed up, and (3) when it was last active. That's all, and not very concerning to me. To prevent an enumeration attack (e.g. an attacker who adds every phone number to their system contacts), you can also disable discovery by phone number.

While Session prevents that, Session lacks forward secrecy. This is very serious; it's silly to compare Session to Signal when Session is flawed in its cryptography. (Details and further reading here: https://soatok.blog/2025/01/14/dont-use-session-signal-fork/ ). Session has recently claimed they will be upgrading their cryptography in V2 to be up to Signal's standard (forward secrecy and post-quantum security), but until then, I don't think it's worth considering.
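
For anyone wondering what forward secrecy buys you concretely, here is a toy hash-ratchet sketch in TypeScript. This is only the general idea, not Signal's actual Double Ratchet: each message key is derived from the current chain key, and the chain key is then ratcheted forward with a one-way function, so compromising today's state doesn't let you recompute the keys that protected earlier messages.

    // toy-ratchet.ts -- hypothetical sketch of a symmetric hash ratchet,
    // not the actual Double Ratchet. Shows why forward secrecy matters:
    // once the chain key advances, old message keys can't be recomputed.
    import { createHmac } from "node:crypto";

    class SendingChain {
      constructor(private chainKey: Buffer) {}

      nextMessageKey(): Buffer {
        // Derive a one-time message key from the current chain key...
        const messageKey = createHmac("sha256", this.chainKey).update("msg").digest();
        // ...then ratchet the chain key forward and forget the old one.
        this.chainKey = createHmac("sha256", this.chainKey).update("chain").digest();
        return messageKey;
      }
    }

    // Usage: keys for messages 1..3 are all different, and an attacker who
    // steals the chain key *after* message 3 cannot reverse the HMAC to get
    // the keys that protected messages 1..3.
    const chain = new SendingChain(Buffer.from("initial shared secret"));
    for (let i = 1; i <= 3; i++) {
      console.log(`message ${i} key:`, chain.nextMessageKey().toString("hex"));
    }

Without a ratchet (or periodic rekeying), a single long-term key decrypts everything ever sent, which is the gap being pointed out in Session's current design.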

I agree that Briar is better, but unfortunately it can't run on iPhones. I'm in the United States, and that excludes 59% of the general population and about 90% of my generation. It's not the fault of the Briar project, but it's a moot point when I can't use it to talk to people I know.




We don't do the "duct-tape an insult to the end to drive your point harder" gimmick here. It will lead to loss of your account.


whoa, losing access to a throwaway account created specifically for posting trolling comments? i'm sure they're shaking in their boots at the prospect


This throwaway account wasn't created specifically for posting trolling comments, this is just my personality :-(


Signal's use of phone numbers is the least of your issues if you've reached this level of inspection. Signal could be the most pristine, perfect thing in the world, and the traffic from the rest of your phone would still be exactly as exposing as your phone number when your enemy is the US government, which can force cooperation from the infrastructure providers.


Your point is correct but irrelevant to this conversation.

The question here is NOT "if Signal didn't leak your phone number could you still get screwed?" Of course you could, no one is disputing that.

The question is "if you did everything else perfectly, but used Signal, could the phone number be used to screw you?" The answer is ALSO of course, but the reason we're talking about it is that this point was made to the creator of Signal many, many times over the years; he dismissed it and his fanboys ridiculed it.


I talked to Moxie about this 20 years ago at DefCon and he shrugged his shoulders and said "well... it's better than the alternative." He has a point. Signal is probably better than Facebook Messenger or SMS. Maybe there's a market for something better.


Is there any reason they didn't use email? It seems like something that would have made it easier to keep some anonymity, while still allowing the person to authenticate.

Email is notoriously insecure and goes through servers that allow it to be archived. Also, email UIs tend not to be optimized for instantaneous delivery of messages.

I wasn't assuming the actual messages would go through email. I assumed they just needed that for a onetime setup. Isn't that the only reason for using a phone number currently?

Briar and Session are the better encrypted messengers.


Session lacks forward secrecy, which isn't ideal.


I remember listening to his talks and had some respect for him. He could defeat any argument about any perceived security regarding any facet of tech. Not so much any more. He knows as well as I do anything on a phone can never be secure. I get why he did it. That little boat needed an upgrade and I would do it too. Of course this topic evokes some serious psychological responses in most people. Wait for it.


> He knows as well as I do anything on a phone can never be secure

I assume because of the baseband stuff to be FCC compliant? Last I checked that meant DMA channels, etc. to access the real phone processor. All easily activated over the air.


> All easily activated over the air.

Indeed. The only reasons this is not used by customer support for more casual access, firmware upgrades, and debugging are policy and the risk of mass-bricking phones, so it is not exposed to them. There are other access avenues as well, including JTAG debugging over USB and Bluetooth.


I don't think the FCC requires DMA channels. That's done out of convenience because it's how PCIe works.


The FCC doesn't require DMA channels, but the baseband processor may have access to them, among other things.


That's done for convenience because that's how PCIe works.


Any citation on this? I’ve never heard that.


47 CFR Part 2 and Part 15

FCC devices are certified/allowed to use spectrum, but you must maintain compliance. If you're a mobile phone manufacturer, you have to be certain that if a bug occurs, the devices don't start becoming Wi-Fi jammers or anything like that.

This means you need to be able to push firmware updates over the air (OTA). These must be signed to prevent just anyone from pushing out such an OTA.
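
To make "signed OTA" concrete, here is a hypothetical minimal sketch assuming an Ed25519 vendor key (real vendors' schemes differ in detail): the device ships with the vendor's public key and only accepts firmware whose signature verifies against it. Which is also why a party that can compel the vendor to sign can ship an update the device will happily accept.

    // toy-ota.ts -- hypothetical sketch of signed firmware updates,
    // not any vendor's real scheme. The device carries the vendor's
    // public key; only blobs signed by the matching private key are accepted.
    import { generateKeyPairSync, sign, verify } from "node:crypto";

    // Vendor side: generate a signing key pair and sign a firmware image.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");
    const firmware = Buffer.from("baseband firmware v1.2.3");
    const signature = sign(null, firmware, privateKey); // Ed25519: algorithm is null

    // Device side: the public key is baked in at manufacture.
    function acceptUpdate(blob: Buffer, sig: Buffer): boolean {
      return verify(null, blob, publicKey, sig);
    }

    console.log(acceptUpdate(firmware, signature));                      // true
    console.log(acceptUpdate(Buffer.from("tampered image"), signature)); // false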

The government has a history of compelling companies to push out signed updates.


There are hobbyist groups that tinker with these things. They are just as lazy as me and do not publish much. One has to find and participate in their semi-private .onion forums. Not my cup of tea. Most of it goes over my head and requires special hardware I am not interested in tinkering with.


I have no idea if that was true 20 years ago, but it's not true now. XMPP doesn't have this problem; your host instance knows your IP but you can connect via Tor.


Tor has the problem that you frequently don't know who's running all the nodes in the network. For a while the FBI was running Tor exit nodes in an attempt to see who messages were being sent to. Maybe they still are.

OTR has been on XMPP for so long now.


Is that good? According to the Wikipedia page, it seems the last stable release was 9 years ago. Is anyone using that? Last time I had a look at XMPP, everybody was using OMEMO.



Sorry, I don't pay attention to anyone who disses PGP. I don't care if it's easy to misuse. I focus on using it well instead of bitching about misusing it.

If there's one thing we learned from Snowden, it's that the NSA can't break PGP, so these people who live in the world of theory have no credibility with me.


Before my arrest (CFAA) I operated on Tor and PGP for years. I had property seized and I had a long look at my discovery material, as I was curious which elements they had obtained.

I never saw a single speck of anything I ever sent to anyone via PGP in there. They had access to my SIGAINT e-mail and my BitMessage unlocked, but I used PGP for everything on top of that.

Stay safe!


Would be curious to know (if you're willing to share) how you were found if you were working to obscure / encrypt your communications. What _was_ it that ultimately gave you away or allowed them to ID you?

I'd be curious as well, though I completely understand if they don't want to talk. Someone should write a book just listing the usual mistakes.

If you sign PGP messages with a key you've associated with your identity, they have high confidence you sent emails signed with that key. I.e., PGP does not offer group-deniable signatures as a default option.

So what? Whether this matters depends on your threat model, but you present this as a universal concern. Yes, we know, and we use it appropriately.

Wow, that's a phenomenally bad policy. There are many legit critiques which can be leveled at PGP, depending on your use case. [Open]PGP is not a silver bullet. You have to use it correctly.

"You have to use it correctly" is true for everything. Stop parroting garbage you read and exercise some critical thinking.

says the 8-day-old sock puppet.

It's not a sock puppet in the usual sense. Every time I log in I create a new account, and it lasts until I get logged out for whatever reason. But I'm not having conversations between multiple accounts that I control, if that's what you mean.

My mom can use Signal no problem. She doesn't know what half the words in your comment mean, though.


I could have sworn Signal adopted usernames some time back, but in my eyes it's a little too late.

Suppose they didn't require that. Wouldn't that open themselves up to DDoS? An angry nation or ransom-seeker could direct bots to create accounts and stuff them with noise.


I think the deal is you marry the strong crypto with a human-mediated security process which provides high confidence the message sender maps to the human you think they are. And even if they are, they could be a narc. Nothing in strong crypto prevents narcs, in whom ill-advised trust has been granted, from copying messages they're getting over the encrypted channel and forwarding them to the man.

And even then, a trusted participant could fail to understand they're not supposed to give their private keys out, or could be rubber-hosed into revealing their key PIN. All sorts of ways to subvert "secure" messaging besides breaking the crypto.

I guess what I'm saying is "Strong cryptography is required, but not sufficient to ensure secure messaging."


Yes. Cheap-identity systems such as Session and SimpleX are trivially vulnerable to this, and your only defence is to not give out your address, since addresses are unguessable. If you have someone's address, you can spam them, and they can't stop it except by deleting the app or resetting to a new address and losing all their contacts.

SimpleX does better than Session because the address used to add new contacts is different from the address used with any existing contact and is independently revocable. But if that address is out there, you can receive a full queue of spam contacts before you next open the SimpleX app.

Both Session and SimpleX are trivially vulnerable to storage DoS as well.
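
To make the "independently revocable" part concrete, here is a hypothetical toy model in TypeScript (not SimpleX's actual protocol): each accepted contact gets their own private queue address, and the public invite address is a separate thing you can revoke without disturbing existing chats.

    // toy-addresses.ts -- hypothetical sketch, not the SimpleX protocol.
    // Each accepted contact gets a private per-contact queue; the public
    // invite address is separate and can be revoked independently.
    import { randomUUID } from "node:crypto";

    class Inbox {
      private inviteAddress: string | null = randomUUID(); // shareable, spam-able
      private contactQueues = new Map<string, string>();   // contact -> private queue

      acceptContact(name: string): string {
        const queue = randomUUID();          // known only to this one contact
        this.contactQueues.set(name, queue);
        return queue;
      }

      revokeInvite(): void {
        this.inviteAddress = null;           // stops new spam; existing chats unaffected
      }

      canDeliver(address: string): boolean {
        return address === this.inviteAddress ||
               [...this.contactQueues.values()].includes(address);
      }
    }

    // Usage: after revoking the public invite, spam to it is dropped,
    // but the queue Alice already has keeps working.
    const me = new Inbox();
    const aliceQueue = me.acceptContact("alice");
    me.revokeInvite();
    console.log(me.canDeliver(aliceQueue));            // true
    console.log(me.canDeliver("old-invite-address"));  // false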


There are a lot of solutions to denial-of-service attacks other than collecting personal information. Plus, you know, you can always delete an account later? If what Signal says is true, then this amounts to a few records in their database, which isn't cause for concern IMO.


Happy to see mastodon.xyz score 100%.

Mastodon is pretty cool and proof that we can make federation work.


Yeah, or worse, like my boss. We don't have a style guide, but he always wants style changes in every PR, and those style changes are sometimes contradictory across different PRs.

Eventually I've told him "if your comment does not affect performance or business logic, I'm ignoring it". He finally got the message. The fact that he accepted this tells me that deep down he knew his comments were just bike shedding.


I've been in teams like this: people who are lower on the chain of power get run in circles as they change to appease one person, then change to appease another, then change back to appease the first again.

Then, going through their code, they make excuses about their code not meeting the same standards they demand.

As the other responder recommends, a style guide is ideal; you can even create an unofficial one and point to it when conflicting style requests are made.


> Then, going through their code, they make excuses about their code not meeting the same standards they demand.

Yes!! Exactly. When it comes to my PRs, he once made this snarky comment about him having high expectations in terms of code quality. When it comes to his PRs, he does the things he tells me not to do. In fact, I once sent him a "dis u?" with a link to his own code, as a response to something he told me I shouldn't do. To his credit he didn't make excuses, he responded "I could've done better there, agreed".

In general he's not bad, but his nitpicking is bad. I don't really understand what's going on in his mind that drives this behavior, it's weird.


You should have a style guide, or adopt one. Having uniform code is incredibly valuable as it greatly reduces the cognitive load of reading it. Same reason that Go's verbose "err != nil" works so well.


Style guidelines should be enforced automatically. Leaving that for humans to verify is a recipe for conflict and frustration.


Ideally yes, but there are plenty of cases where that's not desirable or possible.

For example, most people would agree you should use exhaustive checks when possible (matching in Rust, records in TypeScript, etc.). But how do you enforce that? Ban other types of control flow? Even if you find a good, balanced way to enforce it, you won't always want to enforce it. There are plenty of good use cases where you explicitly don't want a check to be exhaustive. At which point you've got to make sure there's an escape mechanism for whatever crackhead check you've set up. Better to just leave a comment with a link to your style guide explaining why this is done. Many experienced developers who are new to Rust or TypeScript simply never think of things like this, so it's worthwhile to document it.
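
For anyone who hasn't seen it, here is what the TypeScript version of an exhaustive check looks like, escape hatch included. A minimal sketch, not a prescription:

    // exhaustive.ts -- minimal sketch of a compile-time exhaustiveness check.
    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "square"; side: number };

    // If a new variant is added to Shape, area() stops compiling,
    // which is the "exhaustive check" a generic linter can't enforce for you.
    function assertNever(x: never): never {
      throw new Error(`unhandled variant: ${JSON.stringify(x)}`);
    }

    function area(s: Shape): number {
      switch (s.kind) {
        case "circle":
          return Math.PI * s.radius ** 2;
        case "square":
          return s.side ** 2;
        default:
          // Omitting this call is the deliberate opt-out: the check is
          // per-switch, which is why a blanket rule is hard to automate.
          return assertNever(s);
      }
    }

    console.log(area({ kind: "circle", radius: 2 }));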


He probably did the calculation 707 - 200 = 507 and 200/507 = 0.39, which is different from 0.46.


You're not crazy, I'm also always disappointed.

My theory is that the people who are impressed are trying to build CRUD apps or something like that.


so 99% of all software?


Exactly. That's why you see so many people say it works great. And the rest of us are like "am I the crazy one?" No, you just don't build CRUD apps.

