Hacker News | f_devd's comments

> Isn't it the employees' responsibility to pay for their union membership?

No, contributions are handled by the employer/company.


Individuals pay around a 25-30 Euro/month contribution, which is tax deductible. Employers can pay a lot more, like 10% or more, which often goes to a social security fund.


Remind me to never look at Twitter replies again; by far the most counterproductive threads I've seen.


70% is bot traffic, the rest are brain-damaged, terminally online human shells.

It really isn't representative of real-world average human intelligence and the capacity to debate or even discuss ideas.


I hope the bots get the vote eventually. /s


No, it's GC-like. Up to 4x slowdown IIRC.


To better port C to Rust: 3C (Checked C), c2rust, Crown ownership analysis, RustMap, c2saferrust (LLM), Laertes


Where do you detect malice? The claims are quite accurate.


Accurate? Let's take the WiFi (other users already commented on the other ones). Open a WiFi access point with the name of the restaurant, intercept the DNS requests, and serve your filtered stuff.

PS: If the text is real and not trolling, the key phrase in it is 'rarely happen', which we could then apply to car seatbelts as well.


Then what? The user presumably sees TLS certificate warnings since you don't have valid certificates. HSTS would prevent downgrades to plain HTTP and is pretty common on sensitive websites.

Isn't the better advice to avoid clicking through certificate warnings? That applies both on and off open wifi networks.

There is a privacy concern, as DNS queries would leak. Enabling strict DoH helps (which is not the default browser setting).
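To make the DoH point concrete, here's a minimal sketch (hypothetical helper, Python stdlib only) of the RFC 8484 wire-format body that a strict-DoH client POSTs over HTTPS; because the lookup travels inside TLS, an open access point can no longer read or rewrite it:

```python
import struct

def build_doh_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a DNS wire-format query (RFC 1035), the body of a DoH
    POST request (RFC 8484, Content-Type: application/dns-message)."""
    # Header: ID=0 (RFC 8484 recommends 0 for cache friendliness),
    # flags=0x0100 (recursion desired), QDCOUNT=1, other counts 0.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE (1 = A record) and QCLASS (1 = IN).
    question = qname + struct.pack("!HH", qtype, 1)
    return header + question
```

In practice the browser POSTs this body to its configured resolver URL (e.g. https://cloudflare-dns.com/dns-query, used here only as an example) rather than sending plaintext UDP port 53 queries the access point could intercept.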


I am afraid that it is not only about privacy (which they recommend ignoring); there are many vectors to choose from: CA vectors, let's say TrustCor (2022), e-Tugra (2023), Entrust (2024); packet injection vectors; or "click here" / "use your login first" vectors as you commented; plus bugs and misconfigurations.

These ones are known. Therefore I just cannot believe that those who wrote the open letter did not even think about such significant events from the past year (I stress, the past year), or about zero-days.

We are talking about people connecting to an unknown, unsupervised network, without knowing what new vulnerabilities will soon be published either, and the authors of the open letter know it, because they are hiding behind the excuse of "rarely".


> like CA vectors

This gets complicated because you're not safe on your home or corporate network either when CAs are breached. The incident everyone talks about, DigiNotar (2011), involved stolen CA keys being used to issue certificates that intercepted traffic across several ISPs. If that's the threat you're looking to handle, "avoid public wifi" isn't the right answer. Perhaps you're doing certificate pinning, application-level signing, closed networks, etc.
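As a sketch of what certificate pinning means here (hypothetical helper; real deployments usually pin the SPKI hash rather than the whole certificate), the check is just a fingerprint comparison recorded out of band, which sidesteps CA trust entirely:

```python
import hashlib

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Certificate pinning sketch: accept the peer only if the SHA-256
    fingerprint of its DER-encoded certificate matches a fingerprint
    the client was provisioned with, no matter which CA signed it."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex.lower()

# With Python's ssl module, the DER bytes could come from
# ssl_sock.getpeercert(binary_form=True) after the handshake.
```

A compromised CA can mint a browser-trusted certificate, but it cannot make that certificate's fingerprint match a pin distributed ahead of time.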

> Entrust (2024)

I recently wrote a blog post[1] about CA incidents, so I notice this one isn't like the others. Entrust's PKI business was not impacted by the hack and Entrust remains a trusted CA.

> Click here or use your login

Password manager autofill is the solution there, both on public wifi and on a corporate network. Perhaps an ad blocker as well.

> people connecting to an unknown unsupervised network

Aren't most people's home networks "unsupervised"?

[1] https://alexsci.com/blog/ca-trust/


Why do you talk about home networks being "unsupervised" when we are talking about public networks, access points created to hunt people?

Do you notice that your proposed solutions are trying to fix a problem? The open letter does not propose solutions; it simply denies the problem.

We need to be sincere with people: those "incidents" have happened for a long time and, given the history, will unfortunately keep happening. Bad actors keep hunting; yesterday the CAs, and tomorrow? So if one connects to an open WiFi one may fall victim to a trap, probably not at home but in an airport or other crowded place with long waits, and even if you do not browse, another app in the background will be trying to.

It took many years to make people just slightly aware, and now they (if the text is real) intend to undo it. But to be sincere I really do not mind much; I just perceive that open letter as malicious.


CA compromise feels like an exotic attack, beyond what "everyday people and small businesses" should worry about. There's no solution to CA compromise offered because the intended audience is not getting hacked in that way. If your concern is that high-risk individuals need different advice, I agree, but the letter also makes clear that they are not the focus.

Are there specific, modern examples of CA compromise being used to target low-risk individuals? Is that a common attack vector for low-risk individuals and small businesses?


And how exactly do you plan to forge the SSL certificates to deliver your filtered content?


> intercept the DNS requests and serve your filtered stuff.

How do you get from a malicious DNS response to a browser-validated TLS cert for the requested host?


what filtered stuff?

you mean partial web pages?

most browsers support DNS over HTTPS


> I have no idea why I should be against using LLM

It highly depends on your own perspective and goals, but one of the arguments I agree with is that habitually using it will effectively prevent you from building any skill or insight into the code you've produced. That in turn leads to unintended consequences as implementation details become opaque and layers of abstraction build up. It's like hyper-accelerating tech debt for an immediate result; if it's a simple project with no security requirements, there's little reason not to use the tool.


I never started for similar reasons


I have the same with journals, but the video archiving has actually come up a few times, though still fairly rarely. I think the difference is that you control the journal (and so rarely feel like you need its content), while the videos you're archiving are by default outside of your control and can be more easily lost.


A more modern approach to doing the same is to use polymerized quantum dots (I believe they emit wide-spectrum white when a voltage is applied), passing that through a quantum dot film to get any specific wavelength.


I do not think this is the case; there has been some research into brainrot videos for children[0], and the trend doesn't seem positive. I would argue anything 'constructed' enough will not sit as far along the brainrot spectrum.

[0]: https://www.forbes.com/sites/traversmark/2024/05/17/why-kids...


Yeah, I don't think surreal or constructed content is good in the early data mix, but as part of mid- or post-training it seems generally reasonable. But also, this is one of those cases where anthropomorphizing the model probably doesn't work, since a major negative effect of Cocomelon is kids only wanting to watch Cocomelon, while a large model has no choice in its training data distribution.


I would agree that a careful and very small amount of the above brainrot in post-training could improve certain metrics, if the main dataset didn't contain any. But given how much data current LLMs consume, and how much is being produced and put back into the cycle, I doubt it will be missed.

