Hacker News | samuel's comments

Is this source trustworthy? (I have no idea, I'm not German.)

https://www.welt.de/vermischtes/kriminalitaet/article2521783...


The source is a somewhat trustworthy mainstream outlet, but not a really good one, as even the headline is wrong. The other person was not jailed for insults against the rapists, but for threats of violence. And that is an attack on the state's monopoly on violence itself, hence the harsh sentence.

But indeed, the rulings against the rapists don't seem all right, and are very much out of balance with the other sentence.


It appears that the ruling regarding the rapes was not so straightforward [1]; certainly not something that you can use as a one-line argument. There are also other articles describing what presumably happened there in 2020.

Regarding the case of 'Maja R.', here's a summary [2] (e.g. she didn't show up for the first two hearings [3] - that would certainly raise the anger of the righteous if somebody not in their favor did that).

I'm in doubt whether this one case is sufficient to prove the downward spiral that some people claim to perceive (it was also brought up in the context of migration here on HN recently, and from the sources which I could find I'm not sure it fully qualifies there either).

[1]: https://www.mopo.de/hamburg/details-aus-dem-prozess-darum-ka...

[2]: https://skeptics.stackexchange.com/a/57113

[3]: https://www.mopo.de/hamburg/ehrloses-vergewaltigerschwein-20...


I agree with the sentiment, but I think it's a pretty naive view of the issue. Companies will want all the info they can get, so that if one of their workers does something illegal or inappropriate they can deflect the blame. That's a much more palpable risk than "local CA certificates being compromised" or something like that.

And some of the arguments are just very easily dismissed. You don't want your employer to see your medical records? Why were you browsing them during work hours, on your employer's device, in the first place?


TLS inspection can _never_ be implemented in a good way: you will always have cases where it breaks something, and most commonly you will see very bad implementations that break most tools (e.g. it is very hard to trust a new CA because the OS, browser, Java, Python, etc. each have their own CA store).
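A minimal sketch of what "trusting a new CA" can involve on a single endpoint (paths hypothetical; note that most of these variables replace the default bundle rather than extend it, while NODE_EXTRA_CA_CERTS appends):

  export SSL_CERT_FILE=/etc/corp/ca-bundle.pem       # OpenSSL-based tools
  export CURL_CA_BUNDLE=/etc/corp/ca-bundle.pem      # curl and libcurl consumers
  export REQUESTS_CA_BUNDLE=/etc/corp/ca-bundle.pem  # Python requests
  export NODE_EXTRA_CA_CERTS=/etc/corp/corp-root.pem # Node.js, appends one extra root
  # Java keeps its own keystore and needs an explicit import (JDK 9+):
  keytool -importcert -cacerts -alias corp-root -file /etc/corp/corp-root.pem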

This means devs/users will skip TLS verification ("just make it work"), setting a dangerous precedent. Companies want to protect their data? Well, just protect it! Least privilege, data minimization, etc. are all good strategies for avoiding data leaks.


Sure it can; it just requires endpoint cooperation, which is a realistic expectation for most corporate IT shops.


You also need some decent support + auditing. There are a couple of places to configure (e.g. setting CURL_CA_BUNDLE globally covers multiple OSS libraries) but there will be cases where someone hits one of the edge clients and tries to ignore the error, which ideally would lead to a scanner-triggered DevOps intervention. I think a fair amount of the rancor on this issue is really highlighting deeper social problems in large organizations, where a CIO should be seeing that resentment/hostility toward the security group is a bigger risk than the surface problem.


I'm all for the privacy of individuals, but a work network is not the public internet either.

A solution is required to limit the network to work-related activities and also to inspect server communications for unusual patterns.

In one example someone’s phone was using the work WiFi to “accidentally” stream 20 GB of Netflix a day.


What's the security risk of someone streaming Netflix?

There are better ways to ensure people are getting their work done that don't involve spying on them in the name of "security".


Security takes many forms, including Availability.

Having branch offices with 100 Mbps (or less!) Internet connections is still common. I’ve worked tickets where the root cause of network problems such as dropped calls ended up being due to bandwidth constraints. Get enough users streaming Spotify and Netflix and it can get in the way of legitimate business needs.

Sure, there are shaping/QoS rules and DNS blocking. But the point is that some networks are no place for personal consumption. If an employer wants to use a MITM box to enforce that, so be it.


I think that's a very loose interpretation of Availability in the CIA triad.

This looks a lot like using the MITM hammer to crack every nut.

If this is an actual concern, why not deny personal devices access to the network? Why not restrict the applications that can run on company devices? Or provide a separate connection for personal devices/browsing/streaming?

Why not treat them like people and actually talk to them about the potential impacts? Give people personal responsibility for what they do at work.


Yes, but also it’s not an employer’s job to provide entertainment during work hours on a factory floor where there are machines that can kill you if you’re not careful.

There’s a famous fable where everyone is questioning the theft victim about what they should’ve done and the victim says “doesn’t the thief deserve some words about not stealing?”

Similarly, it’s a corporate network designed and controlled for work purposes. Connecting your personal devices or doing personal work on work devices is already not allowed per policy, but people still do it, so I don’t blame network admins for blocking such connections.


I agree with all you said, but it's not like the companies advertise it: they should come right out and say "we MITM TLS", but they don't. It's all behind-the-scenes smoke and mirrors.


I agree, that’s a bad business practice.

Normally no personal device has the firewall's root certs installed, so such devices just experience network issues from time to time, and DNS queries and ClientHello packets are used for understanding network traffic.

However, with recent privacy-focused enhancements, which I love by the way because they protect us from ISPs and others, we (as in everybody) need a way to monitor and allow only certain connections on the work network. How? I don't know, it's an open question.


It’s not at all a loose interpretation.

Availability: Ensures that information and systems are accessible and operational when needed by authorized users


I would still say that is loose — are connection issues caused by staff using streaming services generally considered to be DoS?

And on balance I'd say losing Integrity is a bad trade off to make here.


What’s wrong with watching Netflix at work instead of working? That’s not for me to say, but I understand employers not wanting to allow it.


In Europe they prefer not to go to jail for privacy violations. It turns out most of these "communist" regulations are actually pretty great.


Does GDPR (or similar) establish privacy rights to an employee’s use of a company-owned machine against snooping by their employer? Honest question, I hadn’t heard of that angle. Can employers not install EDR on company-owned machines for EU employees?


(IANAL) I don't think there is a simple response to that, but I guess that given that the employer:

- has established a detailed policy about personal use of corporate devices

- makes a fair attempt to block work-unrelated services (Hotmail, Gmail, Netflix)

- ensures the security of the monitored data and deletes it after a reasonable period (such as 6–12 months)

- and uses it only to apply cybersecurity-related measures like virus detection, UNLESS there is a legitimate reason to target a particular employee (legal inquiry, misconduct, etc.)

I would say that it's very much doable.

Edit: More info from the Dutch regulator https://english.ncsc.nl/publications/factsheets/2019/juni/01...


It has to have a good purpose. Obviously there are a lot of words written about what constitutes a good purpose. Antivirus is probably one. Wanting to intimidate your employees is not. The same thing applies to security cameras.

Privacy laws are about the end-to-end process, not technical implementation. It's not "You can't MITM TLS" - it's more like "You can't spy on your employees". Blocking viruses is not spying on your employees. If you take the logs from the virus blocker and use them to spy on your employees, then you are spying on your employees. (Virus blockers aiming to be sold in the EU would do well not to keep unnecessary logs that could be used to spy on employees.)


Yes, at least in the Netherlands it is generally accepted that employees may use a company device for personal purposes, too.

Using a device owned by your company to access your personal GMail account does NOT void your legal right to privacy.


So does nobody in Europe use an EDR or intercepting proxy since GDPR went into force?


I have found a definitive answer from the Dutch regulator (although it could be out of date).

https://english.ncsc.nl/binaries/ncsc-en/documenten/factshee...


What’s the definitive answer? From what I can tell that document is mostly about security risks and only mentions privacy compliance in a single paragraph (with no specific guidance). It definitely doesn’t say you can or can’t use one.


That's probably because there is no answer. Many laws apply to the total thing you are creating end-to-end.

Even the most basic law like "do not murder" is not "do not pull gun triggers" and a gun's technical reference manual would only be able to give you a vague statement like "Be aware of local laws before activating the device."

Legal privacy is not about whether you intercept TLS or not; it's about whether someone is spying on you, which is an end-to-end operation. Should someone be found to be spying on you, then you can go to court and they will decide who has to pay the price for that. And that decision can be based on things like whether some intermediary network has made poor security decisions.

This is why corporations do bullshit security by the way. When we on HN say "it's for liability reasons" this is what it means - it means when a court is looking at who caused a data breach, your company will have plausible deniability. "Your Honour, we use the latest security system from CrowdStrike" sounds better than "Your Honour, we run an unpatched Unix system from 1995 and don't connect it to the Internet" even though us engineers know the latter is probably more secure against today's most common attacks.


Okay, thanks for explaining the general concept of law to me, but this provides literally no information to figure out the conditions under which an employer using a TLS-intercepting proxy to snoop on the internet traffic of a work laptop violates GDPR. I never asked for a definitive answer; just, you know, an answer that is remotely relevant to the question.

I don’t really need to know, but a bunch of people seemed really confident they knew the answer and then provided no actual information except vague gesticulation about PII.


Are they using it to snoop on the traffic, or are they merely using it to block viruses? Lack of encryption is not a guarantee of snooping. I know in the USA it can be assumed that you can do whatever you want with unencrypted traffic, which guarantees that if your traffic is unencrypted, someone is snooping on it. In Europe, this might not fly outside of three-letter agencies (who you should still be scared of, but they are not your employer).


Your question was: "So does nobody in Europe use an EDR or intercepting proxy since GDPR went into force?"

Given that a regulator publishes a document with guidelines about DPI, I think it rules out it being impossible to implement. If that were the case, the document would simply say "it's not legal". It's true that it doesn't explicitly list all the conditions you should meet, but that wasn't your question.


You can do it but you'd have to have a good case for it to trump the right to privacy.

It's not as simple as in the US, where companies consider everything on a company device their property even if employees use it privately.


They can, but the list of "if..." and "it depends..." is much longer and more complicated, especially when it comes to how the obtained information may be used.


Yes. GDPR covers all handling of PII that a company does. And it's sort of default-deny, meaning that a company is not allowed to handle (process and/or store) your data UNLESS it has a reason that makes it legal. This is where it becomes more blurry: figuring out if the company has a valid reason. Some are simple, e.g. if required by law => valid reason.

GDPR does not care how the data got "in the hands of" the company; the same rules apply. Another important thing is the principles of GDPR. They sort of underlie everything. One principle to consider here is that of data minimization. This basically means that IF you have a valid reason to handle an individual's PII, you must limit the data points you handle to exactly what you need and not more.

So: a company proxy breaking TLS and logging everything? Well, the company obviously has a valid reason to handle some employee data. But if I use my work laptop to access private health records, then that is very much outside the scope of what my company is allowed to handle. And logging (storing) my health data without a valid reason is not GDPR compliant.

Could the company fire me for doing private stuff on a work laptop? Yes probably. Does it matter in terms of GDPR? Nope.

Edit: Also, "automatic" or "implicit" consent is not valid. So the company cannot say something like "if you access private info on your work PC then you automatically consent to $company handling your data". All consent must be specific, explicit and retractable.


What if your employer says “don’t access your health records on our machine”? If you put private health information in your Twitter bio, Twitter is not obligated to suddenly treat it as if they were collecting private health information. Otherwise every single user-provided field would be maximally radioactive under GDPR.


Many programmers tend to treat the legal system as if it was a computer program: if(form.is_public && form.contains(private_health_records)) move(form.owner, get_nearest_jail()); - but this is not how the legal system actually works. Not even in excessively-bureaucratic-and-wording-of-rules-based Germany.


Yeah, that’s my point. I don’t understand why the fact that you could access a bunch of personal data via your work laptop in express violation of the laptop owner’s wishes would mean that your company has the same responsibilities to protect it that your doctor’s office does. That’s definitely not how it works in general.


The legal default assumption seems to be that you can use your work laptop for personal things that don't interfere with your work. Because that's a normal thing people do.


I suspect they should say "this machine is not confidential" and have good reasons for that - you can't just impose extra restrictions on your employees just because you want to.

The law (as executed) will weigh the normal interest in employee privacy, versus your legitimate interest in doing whatever you want to do on their computers. Antivirus is probably okay, even if it involves TLS interception. Having a human watch all the traffic is probably not, even if you didn't have to intercept TLS. Unless you work for the BND (German Mossad) maybe? They'd have a good reason to watch traffic like a hawk. It's all about balancing and the law is never as clear-cut as programmers want, so we might as well get used to it being this way.


If the employer says so and I do it anyway, then that's an employment issue. I still have to follow company rules. But the point is that the company needs to delete the collected data as soon as possible. They are still not allowed to store it.


I'll give an example I'm more familiar with. In the US, HIPAA has a bunch of rules about how private health information can be handled by everyone in the supply chain, from doctors' offices to medical-record SaaS systems. But if I'm running a SaaS note-taking app and some doctor's office puts PHI in there without an express contract with me saying they could, I'm not suddenly subject to enforcement. It all falls on them.

I'm trying to understand the GDPR equivalent of this, which seems to exist, since every text field in a database does not appear to require the full PII treatment in practice (and that would be kind of insane).


Recently learnt about tailscale funnel, and I love it; I would use it for everything.

tailscale funnel --set-path <secret> <DIRECTORY>

(The path is needed because there are lots of bots who scan tailscale hostnames).
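For example (hostname and file names hypothetical):

  # share a directory under a hard-to-guess path
  tailscale funnel --set-path /s0me-s3cret /home/me/outbox
  # the recipient just needs any HTTPS client:
  curl -O https://mybox.tail1234.ts.net/s0me-s3cret/report.pdf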

This works if the sender is tech savvy (and a tailscale user) but not in the other direction.


GPT Actions allowed mostly the same functionality; I don't get the sudden scare about the security implications. We are in the same place, good or bad.

Btw, it was already possible (but inelegant) to forward GPT Actions requests to MCP servers; I documented it here:

https://harmlesshacks.blogspot.com/2025/05/using-mcp-servers...


Custom connectors are cool and a good selling point, but they have to be remote (afaik there is no Le Chat Desktop), so using them with local resources is not impossible, but hard to set up and not very practical (you need Tailscale funnel or equivalent; see the sketch below).
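A minimal sketch of that workaround, assuming a local MCP server is already listening on port 8000 (port hypothetical):

  # publish localhost:8000 on the public internet so the remote connector can reach it
  tailscale funnel 8000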


I had never expected that I would witness in my lifetime such advanced AI, and now (maybe) extraterrestrial life.

If confirmed, the only remaining big mystery would be to know whether there is intelligent life in any other part of the universe, which I understand is orders of magnitude more unlikely to confirm, but one can dream...


I think the other big mystery is why does the universe exist at all.


I think we are more likely to find extraterrestrial life than create proper AI and not just text summarizers/predictors


Neither of those things have actually happened. This is pure, what's the word. Cope?


One of these obviously hasn't happened, but the other might have, hence my excitement. I don't know how likely the experts think it is (~1%, ~10%, etc.), but I guess the odds aren't high.

With regards to the other one (AI), I did not claim anything other than a subjective assessment. I did not expect to see an AI capable of maintaining a conversation aloud, for example. Maybe I'm easy to impress.


I'm currently reading Yudkowsky's "Rationality: From AI to Zombies". Not my first try, since the book is just a collection of blog posts and I found it a bit hard to swallow due to its repetitiveness, so I gave up after the first 50 "chapters" the first time I tried. Now I'm enjoying it way more, probably because I'm more interested in the topic now.

For those who haven't delved (ha!) into his work or have been put off by the cultish looks, I have to say that he's genuinely onto something. There are a lot of practical ideas that are pretty useful for everyday thinking ("Belief in Belief", "Emergence", "Generalizing from fiction", etc...).

For example, I recall being in a lot of arguments that are purely "semantic" in nature. You seem to disagree about something, but it's just that both sides aren't really referring to the same phenomenon. The source of the disagreement is just using the same word for different, but related, "objects". This is something that seems obvious, but it's the kind of thing you only realize in retrospect, and I think I'm much better equipped now to be aware of it in real time.

I recommend giving it a try.


Yeah, the whole community side to rationality is, at best, questionable.

But the tools of thought that the literature describes are invaluable with one very important caveat.

The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.

It is an incredibly easy mistake to make. To make effective use of the tools, you need to become more humble than before you were using them or you just turn into an asshole who can't be reasoned with.

If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.


> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.

Well said. Rationalism is about doing rationalism, not about being a rationalist.

Paul Graham was on the right track about that, though seemingly for different reasons (referring to "Keep Your Identity Small").

> If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.

On the other hand, success is supposed to look exactly like actually being right more often.


> success is supposed to look exactly like actually being right more often.

I agree with this, and I don't think it's at odds with what I said. The point is to never stop sincerely believing you could be wrong. That you are right more often is exactly why it's such an easy trap to fall into. The tools of rationality only help as long as you are actively applying them, which requires a certain amount of humility, even in the face of success.


Chapter 67. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu... (And since it's in the book, and people know about it, obviously they're not doing it themselves.)


Also the Valley of Bad Rationality tag. https://www.lesswrong.com/w/valley-of-bad-rationality


Also that the Art needs to be about something else than itself, and a dozen different things. This failure mode is well known in the community; Eliezer wrote about it to death, and so did others.


To no avail, alas. But this is why we now see a thought leader publish a piece to say this is a thing it's now permissible not to be, indeed never to have been at all.


This reminds me of undergrad philosophy courses. After the intro logic/critical thinking course, some students can't resist seeing affirming-the-consequent and post hoc fallacies everywhere (even if more are imagined than not).


> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.

It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight".

And in reality, it's just a bunch of "grown teenagers" posting their pet theories online and thinking themselves "big thinkers".


> you just know they actually mean "MoreRight".

I'm not affiliated with the rationalist community, but I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary: you can either be right, or not be right, while "being wrong" can cover a very large gradient.

I expect the community wanted to emphasize how people employing the specific kind of Bayesian iterative reasoning they were proselytizing would arrive at slightly lesser degrees of wrong than the other kinds that "normal" people would use.

If I'm right, your assertion wouldn't be totally inaccurate, but I think it might be missing the actual point.


> I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary

Specifically (AFAIK) a reference to Asimov’s description[1] of the idea:

> [W]hen people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

[1] https://skepticalinquirer.org/1989/10/the-relativity-of-wron...


Cool, I didn't know the quote, nor that it was inspiration for the name. Thank you.


It's not even about the quote, or Asimov.

"Less wrong" is a concept that has a lot of connotations that just automatically appear in your mind and help you. What you wrote "It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight"." isn't bad because of Asimov said so, or because you were unaware of a reference, but because it's just bad.


The person you are replying to didn't write what you claim they did. I wrote it.

What is it about this thread that makes people confused about who wrote what? It's already happened to two different commenters.


> I'm not affiliated with the rationalist community, but I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary: you can either be right, or not be right, while "being wrong" can cover a very large gradient.

I know that's what they mean at the surface level, but you just know it comes with a high degree of smugness and false modesty. "I only know that I know nothing" -- maybe, but they ain't no modern day Socrates, they are just a bunch of nerds going online with their thoughts.


Sometimes people enjoy being clever not because they want to rub it in your face that you're not, but because it's fun. I usually try not to take it personally when I don't get the joke and strive to do better next time.


That's mildly insulting of you.

I do get the joke; I think it's an instance of their feelings of "rational" superiority.

Assuming the other person didn't get the joke is very... irrational of you.


Like I said, I'm not trying to be a rationalist, or at least not this flavour of it. That being said, I apologise for the dig.


>but you just know it comes with a high degree of smugness and false modesty

No; I know no such thing, as I have no good reason to believe it, and plenty of countering evidence.


Very rational of you, but that's the problem with the whole system.

If you want to avoid thinking you're right all the time, it doesn't help to be clever and say the logical opposite. "Rationally" it should work, but it's bad because you're still thinking about it! It's like the "don't think of a pink elephant" thing.

Other approaches I recommend:

* try and fail to invest in stocks

* read Meaningness's https://metarationality.com

* print out this meme and put it on your wall https://imgflip.com/i/82h43h


>If you want to avoid thinking you're right all the time, it doesn't help to be clever and say the logical opposite.

I don't understand how this is supposed to be relevant here. You seem to be falsely accusing me of doing such a thing, or of being motivated by simple contrarianism.

Again, your claim was:

> but you just know it comes with a high degree of smugness and false modesty

Why should I "just know" any such thing? What is your reason for "just knowing" it? It comes across that you have simply decided to assume the worst of people that you don't understand.


It was my claim, not the person you're responding to now.

As to why I "just know": it's because I'm not a robot, I have experience reading these kinds of claims, and they usually mean what I think they mean.

"You just know" is an idiomatic expression, it's not meant to be dissected.


So much projection.


I don't think I'm more clever than the average person, nor have I made this my identity or created a whole tribe around it, nor do I attend nor host conferences around my cleverness, rationality, or weird sexual fetishes.

In other words: no.


Rationalism is not about trying to be clever; it's very much about trying to be a little less wrong. Most people are not even trying, which includes myself. I don't write down my predictions, I don't keep a list of my errors. I just show up to work like everyone else and don't worry about it.

I really don't understand all the claims that they are intellectually smug and overconfident when they are the one group of people trying to do better. It really seems like all the hatred is aimed at the hubris to even try to do better.


I think there is an arbitrage going on where STEM types who lack background in philosophy, literature, history are super impressed by basic ideas from those subjects being presented to them by stealth.

Not saying this is you, but these topics have been discussed for thousands of years, so it should at least be surprising if Yudkowsky were breaking new ground.


Are there other philosophy- or history-grounded sources that are comparable? If so, I'd love some recommendations. Yudkowsky and others have their problems, but their texts make interesting points, are relatively easy to read and understand, and you can clearly see which real issues they're addressing. From my experience, alternatives tend to fall into two categories: 1. genuine classical philosophy, which is usually incredibly hard to read, and after 50 pages I have no idea what the author is even talking about anymore; 2. basically self-help books that take one or very few ideas and repeat them ad nauseam for 200 pages.


Likely the best resource to learn about philosophy is the Stanford Encyclopedia of Philosophy [0]. It's meant to provide a rigorous starting point for learning about a topic, where 1. you won't get bogged down in a giant tome on your first approach and 2. you have references for further reading.

Obviously, the SEP isn't perfect, but it's a great place to start. There's also the Internet Encyclopedia of Philosophy [1]; however, I find its articles to be more hit or miss.

[0] https://plato.stanford.edu

[1] https://iep.utm.edu


I've read Bertrand Russell's "A History of Western Philosophy" and it's the first philosophy book ever that I didn't drop after 10 pages, for two reasons: 1. He's logical (or at least uses the same STEM kind of logic that we use), so he builds his reasoning step by step and not via bullshit associations like plays on words or contrived jumps. 2. He's not afraid to say "this philosopher said that, and it was an error", which is extremely unusual compared to other scholars, who don't feel authorized to criticise even obvious errors. Really recommend it!


I don't know if there's anything like a comprehensive high-level guide to philosophy that's any good, though of course there are college textbooks. If you want real/academic philosophy that's just more readable, I might suggest Eugene Thacker's "The Horror of Philosophy" series (starting with "In The Dust Of This Planet"), especially if you are a horror fan already.


It's not a nice response but I would say: don't be so lazy. Struggle through the hard stuff.

I say this as someone who had the opposite experience: I had a decent humanities education, but an abysmal mathematics education, and now I am tackling abstract mathematics myself. It's hard. I need to read sections of works multiple times. I need to sit down and try to work out the material for myself on paper.

Any impression that one discipline is easier than another probably just stems from the fact that you had good guides for the one and had the luck to learn it when your brain was really plastic. You can learn the other stuff too, just go in with the understanding that there's no royal road to philosophy just as there's no royal road to mathematics.


People are likely willing to struggle through hard stuff if the applications are obvious.

But if you can't even narrow the breadth of possible choices down to a few paths that can be traveled, you can't be surprised when people take the one that they know that's also easier with more immediate payoffs.


When you've read that passage in the math book twenty times, you eventually come to the conclusion that you understood it (even if in some rare cases you still didn't).

When "struggling through" a philosophy book, that doesn't happen in my experience. In fact, if you look up what others thought that passage means, you'll find no agreement among a bunch of people who "specialize" in authors who themselves "specialized" in the author you're reading. So reading that stuff I feel I have to accept that I will never understand what's written there and the whole exercise is just about "thinking about it for the sake of thinking". This might be "good for me" but it's really hard to keep up the motivation. Much harder than a math book.


I agree with the phenomenon you are talking about, but for mathematics, beyond calculation, the situation isn't really different (and no wonder since you'll quickly end up in the philosophy of mathematics).

You can take an entire mathematical theory on faith and learn to perform rote calculations in accordance with the structure of that theory. This might be of some comfort, since, accepting this, you can perform a procedure and see whether or not you got the correct result (but even this is a generous assumption in some sense). When you actually try to understand a theory and go beyond that to foundations, things become less certain. At some point you will accept things, but, unless you have enough time to work out every proof and then prove to yourself that the very idea of a proof calculus is sound, you will be taking something on faith.

I think if people struggle with doing the same thing with literature/philosophy, it's probably just because of a discomfort with ambiguity. In those realms, there is no operational calculus you can turn to to verify that, at least if you accept certain things on faith, other things must work out... except there is! Logic lords over both domains. I think we just do a horrible job of teaching people how to approach literature logically. Yes, the subtle art of interpretation is always at play, but that's true of mathematics too, and it is true of every representational/semiotic effort undertaken by human beings.

As for use, social wit and the ability to see things in new lights (devise new explanatory hypotheses) are both immediate applications of philosophy and literature, just like mathematics has its immediate operational applications in physics et al.


At the risk of being roasted for recommending pop-culture things, the podcast Philosophize This is pretty good for a high-level overview. I'm sure there are issues and simplifications, and it's certainly not actual source material. The nice part is it's sort of a start-to-finish, he goes from the start of philosophy to modern day stuff, which helps a ton in building foundational understanding without reading everything ever written.


I don't have an answer here either, but after suffering through the first few chapters of HPMOR, I've found that Yudk and other tech-bros posing as philosophers are basically like leaky, dumbed-down abstractions of core philosophical ideas. Just go to the source and read about utilitarianism and deontology directly. Yudk is like the Wix of web development: sure, you can build websites, but you're not going to be a proper web developer unless you learn HTML, CSS and JavaScript. Worst of all, crappy abstractions train you in some actively bad patterns that are hard to unlearn.

It's almost offensive - are technologists so incapable of understanding philosophy that Yudk has to reduce it down to the least common denominator they are all familiar with - some fantasy world we read about as children?


I'd like what the original sources would have written if someone had fed them some speak-clearly pills. Yudkowsky and company may have the dumbing-down problem, but the original sources often have a clarity problem. (That's why people are still arguing about what they meant centuries later. Not just whether they were right - though they argue about that too - but what they meant.)

Even better, I'd like some filtering out of the parts that are clearly wrong.


Have you considered that the ideas that philosophy discusses are very complex, and words are not a sufficient medium to describe those ideas?

That's why you have people arguing over what someone meant. Dumbing it down or trying to write something unambiguous doesn't actually make it better.


I will grant you that the ideas are complex. But if words are not a sufficient medium, then we can't think clearly about the ideas.

Original philosophers have the right to define their own terms. If they can't define them clearly, then they probably aren't thinking clearly enough about their ideas to be able to talk to the rest of us about them. (Unless they consider it more important to sound impressive and hard to understand. But if that's the case, we can say "wow, you sound impressive" and then ignore them.)


I think this accurately channels Paul Graham's attempt to divide the world into "science/tech" and "illegible". But it's a little ridiculous to also divide the world into, "things I understand" and "things I don't", and state that anyone who speaks of the latter should not be permitted to talk to "the rest of us" until they figure out how to move themselves into your "things I understand" category.

The top scientists in AI can't explain how their models make certain decisions (at least not deterministically). Computer code is notoriously gibberish to outsiders. 90% of programmers probably couldn't explain what their job is to people outside of the field. If they can't explain it clearly, should they also be forbidden from speaking publicly until they can?

Is it possible that you lack the background to understand philosophy, and thus philosophers should rightly ignore your demands to dumb down their own field? Why should philosophers even appeal to people like you, when you seem so uninterested in even learning the basics of their field?


No, I don't regard all of philosophy (or all of non-science) as "illegible".

Nor do I regard the line as being "things I understand". I'm not (usually) that arrogant. But if, say, even other computer programmers can't tell for sure what you're saying, the problem is probably you.


HPMOR is not supposed to be rigorous. It's supposed to be entertaining in a way that rigorous philosophy is not. You could make the same argument about any of Camus' novels, but again that would miss the point. If you want something more rigorous, Yudkowsky has it; it's a bit surprising to me to complain he isn't rigorous without talking about his rigorous work.


I really have no interest in reading any more of that guy’s “work”


Totally fair, but why comment on something you aren't interested in understanding?


In AI finetuning, there's a theory that the model already contains the right ideas and skills, and the finetuning just raises them to prominence. Similarly in philosophic pedagogy, there's huge value in taking ideas that are correct but unintuitive and maybe have 30% buy-in and saying "actually, this is obviously correct, also here's an analysis of why you wouldn't believe it anyway and how you have to think to become able to believe it". That's most of what the Sequences are: they take from every field of philosophy the ideas that are actually correct, and say "okay actually, we don't need to debate this anymore, this just seems to be the truth because so-and-so." (Though the comments section vociferously disagrees.)

And it turns out if you do this, you can discard 90% of philosophy as historical detritus. You're still taking ideas from philosophy, but which ideas matters, and how you present them matters. The massive advantage of the Sequences is they have justified and well-defended confidence where appropriate. And if you manage to pick the right answers again and again, you get a system that actually hangs together, and IMO it's to philosophy's detriment that it doesn't do this itself much more aggressively.

For instance, 60% of philosophers are compatibilists. Compatibilism is really obviously correct. "What are you complaining about, that's a majority, isn't that good?" What is wrong with those 40% though? If you're in those 40%, what arguments may convince you? Repeat to taste.


Additional note: compatibilism is only obviously correct if you accept that "free will" actually just means "the experienced perception/illusion of free will" as described by Schopenhauer.

Using a slightly different definition of free will, suddenly Compatibilism becomes obviously incorrect.

And now it's been reduced to quibbling over definitions, thereby reinventing much of the history of philosophy.


I think free will as philosophically used is inherently self-defeating and one of the largest black marks on the entire field, to be fair.


Why is that?

Here's what we know:

- we appear to experience what we call free will from our own perspective (this isn't strong evidence, obviously)

- we are aware that we live in a world full of predictable mechanisms of varying levels of complexity, as well as fundamentally unpredictable mechanisms like quantum mechanics

- we know we are currently unable to fully model our experience and predict next steps

- we know that we don't know whether consciousness as an emergent property of our brains is fully rooted in predictable mechanisms or has some degree of unknowability to it

So really "do we have free will" is a question that relies on the nature of consciousness.


No, I disagree with this conclusion. The problem is very much solvable if one simply keeps the map/territory split in mind, and for every thing asks himself, "am I perceiving reality or am I perceiving a property of my brain?" That is, we "experience free will" - this is to say, our brain reports to us that it evaluated multiple possible behaviors and chose one. However, this does not indicate that multiple behaviors were physically possible, it only indicates that multiple behaviors were cognitively evaluated. In fact, because any deciding algorithm has to evaluate a behavior list or even a behavior tree, there is no reason at all to expect this to have any connection to physical properties of the world, such as quantum mechanics.

(The relevant LessWrong sequence is "How An Algorithm Feels From Inside" https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-alg... which has nothing to do with free will, but does make very salient the idea that perceptions may be facts about your cognition as easily, if not more easily, as facts about reality.)

And when you have that view in mind, you can ask: "wait, why would the brain be sensitive to quantum physics? It seems to be a system extremely poorly suited to doing macroscopic quantum calculations." Once the alternative theory of "free will is a perception of your cognitive algorithm" is salient, you will notice that the entire free-will debate will begin to feel more and more pointless, until eventually you no longer understand why people think this is a big deal at all, and then it all feels rather silly.


> However, this does not indicate that multiple behaviors were physically possible

Okay, fine, but what indicates that multiple behaviors were not physically possible?

Our consciousnesses are emergent properties of networks of microscopic cells, and of chemicals moving around those cells at a molecular level. It seems perfectly reasonable that our consciousness itself could be subject to quantum effects that belie determinism, because it operates at a scale where those effects are noticable.


> Okay, fine, but what indicates that multiple behaviors were not physically possible?

I don't follow. Whether multiple behaviors are possible or not possible, you have to demonstrate that the human feeling of free-will is about that; you have to demonstrate that the human brain somehow measures actual possibility. Alternatively, you have to show that the human cognitive decision algorithm is unimplementable in either of those universes. Otherwise, it's simply much more plausible that the human feeling of freedom measures something about human cognition rather than reality, because brains in general usually measure things around the scale of brains, not non-counterfactual facts about the basic physical laws.


Well, no, your hypothesis is not automatically the null hypothesis that's true unless someone else goes through all goalposts regardless of where you move them to.

I know you thought about it for a moment, and therefore had an obvious insight that 40% of the profession has somehow missed (just define terms so to mean things that would make you correct, and declare yourself right! Easy!) but it's not quite that simple.

Your argument that you just made basically boils down to "well I don't think it works that way even though no one knows. But also it's obvious and I'm going to arbitrarily assign probabilities to things and declare certain things likely, baselessly".

If you read elsewhere in this thread then you might find that exact approach being lampooned :-)


Okay, you know what?

I'll let my argument stand as written, and you can let yours stand as written, and we'll see which one is more convincing. I don't feel like I have any need to add anything.

edit: Other than, I guess, that this mode of argument not being there is what made LessWrong attractive. "But what's the actual answer?!"


The attraction of LessWrong is that they take unanswerable questions with unknowable answers and assign an "actual answer" to them?

That my friend is a religion.


The attraction is that they say "actually, this has an answer, and I can show you why" and then they actually do so.

Philosophy is over-attached to the questions to the point of rejecting a commitment to an answer when it stares them in the face. The point of the whole entire shebang was to find out what the right answer was. All else is distraction, and philosophy has a lot of distraction.


> and I can show you why

But you haven't, you've just said "I have decided that proposition X is more likely than proposition Y, and if we accept X as truth then Z is the answer".

You've not shown that X is more likely than Y, and you have certainly not shown that it must be X and not Y.

Your statements don't logically follow. You said:

> it's simply much more plausible that the human feeling of freedom measures something about human cognition rather than reality

You said your opinion about some probabilities, and somehow drew the conclusion that it was "obvious that 40% of a field's practitioners are wrong".

Someone saying "actually, this has an answer, and I can show you why" to a currently fundamentally unanswerable question is simply going off faith and is literally a religion. It's choosing to believe in the downstream implication despite no actual foundation existing.


There isn't a single philosophical meaning. You are probably thinking of libertarian free will, which, further, is not obviously false, because determinism isn't obviously true.


> And it turns out if you do this, you can discard 90% of philosophy as historical detritus

This is just the story of the history of philosophy. Going back hundreds of years. See Kant and Hegel for notable examples.


Sure, and I agree that LW is doing philosophy in that sense.


Sure, I just object to the characterisation of "actually correct", as though each of those ideas has not gone back and forth on philosophers thinking that particular idea is "actually correct" for centuries. LW does not appear to have much if any novel insight; just much better marketing.


I think philosophy has gone back and forth for so long that they're now, as a field, pathologically afraid of actually committing.

The AI connection with LessWrong means that the whole thing is framed with a backdrop of "how would you actually construct a mind?" That means you can't just chew on the questions, you have to actually commit to an answer and run the risk of being wrong.

This teaches you two things: 1. how to figure out what you actually believe the answer is, and why, and make sure that this is the best answer that you can give; 2. how to keep moving when you notice that you made a misstep in part 1.


>actually, this is obviously correct

Nobody knows what's actually correct, because you have to solve epistemology first, and you have to solve epistemology to solve epistemology... etc., etc.

>And it turns out if you do this, you can discard 90% of philosophy as historical detritus

Nope. For instance , many of the issues Kant raised are still live.

>The massive advantage of the Sequences is they have justified and well-defended confidence

Nope. That would entail answering objections , which EY doesn't stoop to.

>Compatibilism is really obviously correct

Nope. It depends on a semantic issue , what free will means.


To the STEM-enlightened mind, the classical understanding and pedagogy of such ideas is underwhelming, vague, and riddled with language-game problems, compared to the precision a mathematically-rooted idea has.

They're rederiving all this stuff not out of obstinacy, but because they prefer it. I don't really identify with rationalism per se, but I'm with them on this: the humanities are overcooked, and a humanities education tends to be a tedious slog through outmoded ideas divorced from reality.


If you contextualise the outmoded ideas as part of the Great Conversation [1], and the story of how we reached our current understanding, rather than as objective statements of fact, then they become a lot more valuable and worthy of study.

[1] https://en.wikipedia.org/wiki/Great_Conversation


But isn't the content of LessWrong part of the Great Conversation too?


I have kids in high school. We sometimes talk about the difference between the black and white of math or science, and the wishy washy grey of the humanities.

You can be right or wrong in math. You can have an opinion in English.


You can be right or wrong in math and philosophy. You can have an opinion in any of the other sciences: physics, chemistry, biology, medical sciences, history, you name it.


I suggest teaching them about the Problem of Induction [1].

Scientific thinking is not the same as mathematical thinking and it becomes quite wishy washy grey if you zoom in too far!

[1] https://plato.stanford.edu/entries/induction-problem/


You can definitely have wrong opinions in the humanities.


Rationalism largely rejects continental philosophy in favor of a more analytic approach. Yes these ideas are not new, but they’re not really the mainstream stuff you’d see in philosophy, literature, or history studies. You’d have to seek out these classes specifically to find them.


Analytical philosophy is rationalism done right.


They largely reject analytic philosophy as well. Austin and Whitehead are roughly as detestable to a Rationalist as Foucault and Marx.

Carlyle, Chesterton and Thoreau are about the limit of their philosophical knowledge base.


I don't claim that his work is original (the AI-related work probably is, but that's only tangentially related to rationalism), but it's clearly presented and practical.

And, BTW, I could just be ignorant in a lot of these topics; I take no offense at that. Still, I think most people can learn something from an unprejudiced reading.


I think you’re mostly right.

But also that it isn’t what the Yudkowsky is (was?) trying to do with it. I think he’s trying to distill useful tools which increase baseline rationality. Religions have this. It’s what the original philosophers are missing. (At least as taught, happy to hear counter examples)


I think I'd rather subscribe to an actual religion, than listen to these weird rationalist types of people who seem to have solved the problem that is "everything". At least there is some interesting history to learn about with religion


I would too if I could but organized religions make me uncomfortable even though I admire parts of them. Similar to my admiration you don’t need to like the rationality types or believe in their program to find one or more of their tools useful.

I'll also respond to the silent downvoters' apparent disagreement. CFAR holds workshops and a summer camp for teaching rationality tools. In HPMoR, Harry discusses the way he thinks and why. I read it as a way to discuss EY's views in fiction as much as fiction itself.


  For example, I recall being in a lot of arguments that are purely "semantic" in nature.
I believe this is what Wittgenstein called "language games".


In spirit of playing said game, I believe you can just use the word "pedantic" these days.


Your time would probably be better spent reading his magnum opus, Harry Potter and the Methods of Rationality.

https://hpmor.com/


Sounds close to Yuval Noah Harari's book Nexus, which talks about the history of information gathering.


If you're in it just to figure out the core argument for why artificial intelligence is dangerous, please consider reading the first few chapters of Nick Bostrom's Superintelligence instead. You'll get a lot more bang for your buck that way.


Regarding the o3 "trick", if I understand it correctly, I'm trying to do something similar with the "MCP Super Assistant" paired with the DesktopCommander MCP.

I still haven't managed to make the MCP proxy server reliable enough, but it looks promising. If it works, the model would have pretty direct access to the codebase, although any tool call requires a new chat box.

I guess aider in copy-paste mode would be another solution for cheapskates like myself (I'm not a dev and I barely do any programming, but I like to tinker).


I don't think they are comparable. MCPShell is a Go program to run shell scripts, while the other one lets you define MCP operations as bash functions.

Not quite the same. The bash SDK can't be used to run arbitrary shell commands, any more than it can be used to run arbitrary Python programs.


There are fine tunes for that, sure.

