One mistake you're making is thinking that rationalists care more about people far away than about people in their own community. The reality is that they assign the same value to every life.
If children around you are dying of an easily preventable disease, then yes, help them first! If they just need more arts programs, then you help the children dying in another country first.
That's not a mistake I'm making. Assuming you're talking about bog-standard effective altruists---by (claiming to) value the suffering of people far away the same as that of people nearby, they're discounting the people around them heavily compared to everyone else. Compare that to anyone else, who values their friends and family and community far more than those far away. Perhaps they're not discounting them below parity---just below where most people put them.
But anyway this whole model follows from a basic set of beliefs about quantifying suffering and about what one's ethical responsibilities are, and it answers those in ways most people would find very bizarre by turning them into a math problem that assigns no special responsibility to the people around you. I think that is much more contentious and gross to most people than EA thinks it is. It can be hard to say exactly why in words, but that doesn't make it less true.
To me, the non-local focus of EA/rationalism is, at least partially, a consequence of their historically unusual epistemology.
In college, I became a scale-dependent realist, which is to say that I'm most confident of theories and knowledge at the 1-meter, 1-day, 1 m/s scale, and increasingly skeptical of our understanding of things that are bigger or smaller, unfold over longer or shorter timeframes, or move at higher velocities. Maybe there is a technical name for my position? But it is mostly a skepticism about nearly unlimited extrapolation using brains that evolved under selection for reproduction at a certain scale. My position is not that we can't compute at different scales, but that we can't understand at other scales.
In practice, the rationalists appear to invert their confidence, with more confidence in quarks and light-years than in daily experience.
> no special responsibility to the people around you
Musing on the different failure-directions: pretty much any terrible thing done to people in the present can be rationalized by arguing that one gazillion distant/future people are more important. That includes religious versions, where the stakes of the holy war may be presented as all of future humanity being doomed to infinite torment. There are even some cults that pitch it retroactively: offer to the priesthood to save all your ancestors who are in hell because of original sin.
The opposite would be to prioritize the near and immediate, culminating in a despotic god-king. This is somewhat more familiar; we may have more cultural experience and moral tools for detecting and preventing it.
A check on either process would be that the denigrated real/nearby humans revolt. :p
> they're discounting the people around them heavily compared to other people
This statement of yours makes no sense.
EAs by definition are attempting to remove the innate bias that discounts people far away by instead saying all lives are of equal worth.
>turning them into a math problem that assigns no special responsibility to the people around you
"All lives are equal" isn't a math problem. "Fuck it, blow up the foreigners to keep oil prices low" is a math problem; it is a calculus that the US government has spent decades performing. (One that assigns zero value to lives outside the US.)
If $100 can save 1 life 10 blocks away from me or 5 lives in the next town over, what kind of asshole chooses to let 5 people die instead of 1?
And since air travel is a thing, what the hell does "close to us" mean?
For that matter, from a purely selfish POV, helping lift other nations up to become fully advanced economies is hugely beneficial to me, and to everyone on earth, in the long run. I'm damn thankful for all the aid my country gave to South Korea; the scientific advances that have come out of SK have damn well paid back any tax dollars my grandparents paid, many times over.
> It can be hard to say exactly why in words, but that doesn't make it less true.
This is the part where I shout racism.
Because history has shown it isn't about people being far or close in distance, but rather in how those people look.
Americans have shot down multiple social benefit programs because, and this is the reason people who voted against those programs directly gave, "white people don't want black people getting the same help white people get."
Whites in America have voted, repeatedly, to keep themselves poor rather than lift themselves and black families out of poverty at the same time.
Of course Americans think helping people in Africa is "weird".
> If $100 can save 1 life 10 blocks away from me or 5 lives in the next town over, what kind of asshole chooses to let 5 people die instead of 1?
The thing about strict-utilitarian-morality is that it can't comprehend any other kind of morality, because it evaluates the morality of... moralities... on its own utilitarian basis. And then of course it wins over the others: it's evaluating them using itself!
There are entirely different ethical systems that are not utilitarian which (it seems) most people hold and innately use (the "personal morality" I'm talking about in my earlier post). They are hard to comprehend rationally, but that doesn't make them less real. Strict-utilitarianism seems "correct" in a way that personal morality does not because you are working from a premise "only things that I can understand like math problems can be true". But what I observe in the world is that people's fear of the rationalist/EA mindset comes from the fact that they empirically find this way of thinking to be insidious. Their morality specifically disagrees with that way of thinking: it is not the case that truth comes from scrutable math problems; that is not the point of moral action to them.
The EA philosophy may be put as "well sure but you could change to the math-problem version, it's better". But what I observe is that people largely don't want to. There is a purpose to their choice of moral framework; it's not that they're looking at them all in a vacuum and picking the most mathematically sound one. They have an intrinsic need to keep the people around them safe and they're picking the one that does that best. EA on the other hand is great if everyone around you is safe and you have lots of extra spending money and what you're maximizing for is the feeling of being a good person. But it is not the only way to conceive of moral action, and if you think it is, you're too inside of it to see out.
I'll reiterate I am trying to describe what I see happening when people resist and protest rationalism (and why their complaints "miss" slightly---because IMO they don't have the language to talk about this stuff but they are still afraid of it). I'm sympathetic to EA largely, but I think it misses important things that are crippling it, of the variety above: an inability to recognize other people's moralities and needs and fears doesn't make them go away; it just makes them hate you.
> The thing about strict-utilitarian-morality is that it can't comprehend any other kind of morality, because it evaluates the morality of... moralities... on its own utilitarian basis.
I can comprehend them just fine, but I have a deep-seated objection to any system of morality that leaves behind giant piles of dead bodies. We should be trying to minimize the size of the pile of dead bodies (and ideally eliminate the pile altogether!)
Any system of morality that boils down to "I don't care about that pile of dead bodies being huge because those people look different" is in fact not a system of morality at all.
Well, you won't find anyone who disagrees with you here. No such morality is being discussed.
The job of a system of morality is to synthesize all the things we want to happen / want to prevent happening into a way of making decisions. One such thing is piles of dead bodies. Another is one's natural moral instincts, like their need to take care of their family, or the feeling of responsibility to invest time and energy into improving their future or their community or repairing justice or helping people who need help, or to attend to their needs for art and meaning and fun and love and respect. A coherent moral system synthesizes these all and figures out how much priority to allocate to each thing in a way that is reasonable and productive.
Any system of morality that takes one of these criteria and discards the rest is not a system of morality at all, in the very literal sense that nobody will follow it. Most people won't sell out one of their moral impulses for the others, and EA/rationalism feels like it asks them to, since it asks them to place zero value on a lot of things that they inherently place moral value in, and so they find it creepy and weird. (It doesn't ask that explicitly; it asks it by omission. By never considering any other morality and being incapable of considering them, because they are not easily quantified or made logical, it asks you to accept a framework that sets you up to ignore most of your needs.)
My angle here is that I'm trying to describe what I believe is already happening. I'm not advocating it; it's already there, like a law of physics.
Perhaps part of it is that local action can often be an order of magnitude more impactful than the “equivalent” action at a distance. If you volunteer in your local community, you not only have fine-grained control over the benefit you bestow, you also know for a fact that you’re doing good. Giving to a charity that addresses an issue on the other side of the world doesn’t afford this level of control, nor this level of certainty. For all you know most of the donation is being embezzled.
I think another part of it is a sort of healthy nativism or in-group preference or whatever you want to call it. It rubs people the wrong way when you say that you care about someone in a different country as much as you care about your neighbors. That's just…antisocial. Taken to its logical conclusion, a "rationalist" should not only donate all of their disposable income to global charities, they should also find a way to steal as much as possible from their neighbors and donate that, too. After all, those people in Africa need the money much more than their pampered Western neighbors do.