Hacker News
Why immigration research is probably biased (laurenzguenther.substack.com)
27 points by paulpauper 10 days ago | 13 comments




The trouble is that they successfully made people believe the social and human sciences are as reliable as the hard sciences, using statistical data as mathematical proof.

My wife works at a university; her colleagues are the most left-leaning people I have ever seen, yet they are afflicted by their own cognitive incoherence. For example, they are against student visa fraud, but they won't report that a student never showed up, because they don't want to be the one responsible. They just hope there are some right-leaning people like my wife who will pull the trigger.


There are a lot of common sense ideas around migration that don't bear out in reality.

For example, in the article they start with a survey question:

> (“Do you think that, in your current country of residence, laws on immigration of foreigners should be relaxed or made tougher?”; 7-point scale).

I don't think this is a good question. Consider the Brexit paradox: stricter immigration policy often increases the immigration rate, because foreign workers decide to move into the country permanently rather than risk losing access to well-paying jobs. Conversely, relaxing these policies could actually result in less outgroup culture being imported. And yet the more relaxed policy would be labeled "pro-(im)migration".

The book "How Migration Really Works" by Hein De Haas was a good read. I didn't find it biased or partisan. It slaughters a lot of left and right wing sacred cows. Made me realize how much political time and effort gets wasted on things that won't work.


> Consider the Brexit Paradox: more strict immigration policy often increases the immigration rate because foreign workers decide to move into the country permanently

That's not a paradox. British immigration policy post-Brexit was loosened enormously.


Reading the book right now, thanks for the rec!

Do me a favour and add some more to the current Ask HN on books thread please:

https://news.ycombinator.com/item?id=46391572


Statistics about humans only work if the sample is representative of the population, if every respondent tells the truth, and if the quantitative definitions are correctly specified.

If any one of these fails, the meaning collapses.
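The first failure mode is easy to demonstrate. Here is a minimal sketch (my own toy numbers, not from the article) of how non-response bias alone breaks a survey estimate: everyone answers honestly, but people holding one opinion respond half as often.

```python
import random

random.seed(0)

# Hypothetical population: 40% hold opinion A, 60% hold opinion B.
population = ["A"] * 40_000 + ["B"] * 60_000

# Non-response bias: A-holders answer the survey half as often as B-holders.
def responds(opinion):
    return random.random() < (0.25 if opinion == "A" else 0.50)

sample = [p for p in population if responds(p)]

true_share = population.count("A") / len(population)
sample_share = sample.count("A") / len(sample)

print(f"true share of A:    {true_share:.2f}")   # 0.40
print(f"sampled share of A: {sample_share:.2f}")  # roughly 0.25, not 0.40
```

No respondent lied and the definitions were clear, yet the estimate is off by over a third; weighting can correct this only if you already know the response rates.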


In subjects where you never have to put your credibility where your mouth is, the data says whatever you'd like it to say.

Not to butcher Karl Popper early on a Monday morning, but a very good guideline for whether a subject area is scientific is whether the predictions it makes are falsifiable. If I propose a theory, then I should be able to tell you which test result(s) would prove the theory wrong. (I know there are critiques of Popper and falsifiability, so I'm not presenting it as the be-all and end-all of scientific-ness, just a useful yardstick.)


The problem with that heuristic is more basic than Popperian philosophy. It's that researchers are happy to present falsifiable claims that are in fact then later falsified, but the falsifications don't get published by journals or advertised by the press, and university employers don't care that their employees are making false claims.

POSIWID (the purpose of a system is what it does)

Many-analysts results don't prove bias; they often show the effect is underidentified and that analyst choices flip its sign. The best fix: preregistration plus multiverse/sensitivity reporting. Public preferences ≠ measured welfare-support effects.
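To see how a defensible analyst choice can flip a sign, here is a toy Simpson's-paradox sketch (invented numbers, not from any real study): the same data yields a negative slope within each subgroup but a positive slope when pooled, so "stratify or pool" alone determines the headline result.

```python
def ols_slope(points):
    """Slope of a simple least-squares fit y = a + b*x."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points)
    var = sum((x - mx) ** 2 for x, _ in points)
    return cov / var

# Two hypothetical subgroups; within each, y falls as x rises.
group_a = [(1, 10), (2, 9), (3, 8)]
group_b = [(11, 30), (12, 29), (13, 28)]

print(ols_slope(group_a))            # -1.0
print(ols_slope(group_b))            # -1.0
print(ols_slope(group_a + group_b))  # ≈ +1.92: pooling flips the sign
```

Both specifications are "reasonable", which is exactly why preregistering the choice, or reporting the whole multiverse of specifications, matters.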

Good article except for this bit:

> In general, hard sciences are much more reliable than social sciences because standards are higher and topics are less emotional.

Having read lots of both, I'm not sure that's true. There's no way to prove it because nobody has clear definitions for what the words hard, social, science, standards or reliable mean. But the extreme political bias doesn't go away, researcher degrees of freedom are just as large, the topics are sometimes much more emotional, and a lot of fields you'd expect to be hard are methodologically no different to any social science.

For example, a guy in Wales recently claimed a big payout from a fraud suit he won against the Dana Farber cancer center at Harvard. It'd been publishing fraudulent papers for years, yet either nobody noticed or nobody cared. Climatology tells people that the end is nigh, a message sufficiently distressing to make some psychologically vulnerable people commit suicide. Do any social sciences have an emotional effect that extreme? A lot of the COVID pseudo-science was specifically designed to manipulate people's emotions (e.g. journals rejecting correct papers because fewer people might take vaccines as a result [1]). And epidemiology isn't based on any empirical understanding of viruses or disease. It's just modelling no different to the type described in the article.

Unfortunately, ideology and bad incentives are the same no matter what field you look at. There is a hard/soft distinction to be made, but it's more about how close the field is to engineering. Engineering fields have a lot of movement between public and private sector, which keeps the universities a bit more honest. Maybe other fields like law and finance are the same, I don't know, I never read papers in those.

[1] https://x.com/mgmgomes1/status/1291162360657453056


> A lot of the COVID pseudo-science was specifically designed to manipulate people's emotions (e.g. journals rejecting correct papers because fewer people might take vaccines as a result [1]).

1. What Covid pseudo science are you referring to?

2. How does the x thread you referenced show "journals rejecting correct papers because fewer people might take vaccines as a result"? As I read it, the rejection was because the journal had a higher bar for evidence on claims around herd immunity than the researchers were able to meet. That is, they'd happily publish papers suggesting lower levels of herd immunity, but the papers would require more rigour than was provided.

It’s not clear to me that this bar existed just to increase vaccine uptake, but to generally avoid moves towards relaxing interventions based on insufficiently strong evidence (social distancing, mask wearing, etc).

It’s not clear to me what objection one would have to having a high threshold for evidence when publishing research with such high potential to impact public health. That seems like a good thing to me.


It's damning for academia that on HN of all places, a forum filled with academics, this stuff has to be spelled out by outsiders. Once more unto the breach?

The rejection reason: "it is appropriate to hold claims around the herd immunity threshold to a very high evidence bar, as these would be interpreted to justify relaxation of interventions, potentially placing people at risk."

What should scientific institutions do? Answer questions about the natural world honestly.

What should politicians do? Make policy tradeoffs informed by those answers and the preferences of the public.

What were the institutions actually doing? Hiding correct answers due to an ideological and pathologically strong preference for risk avoidance without any regard for cost.

> they’d happily publish papers suggesting lower levels of herd immunity, but the papers would require more rigour than that which was provided.

A "very high evidence bar" that isn't described anywhere is one that doesn't actually exist. They never showed anything wrong with the science, they rejected it out of hand because they didn't like that it would support a libertarian conclusion. Even if they had invented such a bar, requiring weak evidence for policies you like and strong evidence for policies you dislike is exactly the kind of pseudo-scientific intellectual fraud being criticized in the article.

COVID was full of this stuff. That event wasn't surprising, it was typical behavior of most "scientists".

You have to realize that when the universities get disestablished, it'll be because of decades of discussions like this one. The correct response to that Twitter thread is to demand the journal be immediately shut down and that heads roll at the publisher. The wrong response is to say "of course we should block research that supports policies we dislike, why would anyone be against that?"


> A "very high evidence bar" that isn't described anywhere is one that doesn't actually exist

If your claim is: “if a standard isn’t explicitly written down somewhere such that one can utilise it to ascertain whether something meets it without ambiguity then it doesn’t exist” then the entailments are absurd. The world is full of standards that do not meet such a bar (“reasonable doubt”, for example) yet obviously this is a real standard.

> COVID was full of this stuff

Yet when I asked you to give examples of "this stuff", you failed to do so. How come?

> when the universities get disestablished

Lmao. Not going to happen, I see we’re off in cloud cuckoo land now.



