The dataset covers ~80% of food and beverage products sold, and inclusion in it is very probably skewed toward high-volume items. So the lower bound is something like 57% (0.8 × 71%, if none of the remaining 20% is ultra-processed).
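A back-of-the-envelope sketch of that bound, assuming the 71% ultra-processed figure from the quoted paper applies only to the ~80% of products the dataset covers:

```python
# Back-of-the-envelope bounds on the ultra-processed share of the
# whole food supply. Assumption: the ~71% figure applies only to the
# ~80% of products covered by the dataset.
coverage = 0.80          # share of products in the dataset
ultra_in_dataset = 0.71  # share of covered products classified as ultra-processed

# Lower bound: none of the uncovered 20% is ultra-processed.
lower_bound = coverage * ultra_in_dataset
# Upper bound: all of the uncovered 20% is ultra-processed.
upper_bound = coverage * ultra_in_dataset + (1 - coverage)

print(f"lower bound: {lower_bound:.0%}")  # 57%
print(f"upper bound: {upper_bound:.0%}")  # 77%
```

So whatever the coverage bias, the true share sits somewhere between roughly 57% and 77%.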
Here it actually means 71% of products, but the paper was published by MDPI, which has been criticized for predatory (i.e. fraudulent, junk-science-enabling) practices.
From TFA:
"We report results of a cross-sectional assessment of the 2018 US packaged food and beverage supply by nutritional composition and indicators of healthfulness and level of processing. Data were obtained through Label Insight’s Open Data database, which represents >80% of all food and beverage products sold in the US over the past three years. Healthfulness and the level of processing, measured by the Health Star Rating (HSR) system and the NOVA classification framework, respectively, were compared across product categories and leading manufacturers. Among 230,156 food and beverage products, the mean HSR was 2.7 (standard deviation (SD) 1.4) from a possible maximum rating of 5.0, and 71% of products were classified as ultra-processed. "
Nitpick: The Kiel Institute is not a think tank in the sense in which most people understand the word.
It is a federally funded research organization (part of the family of Leibniz institutes) similar to a university but without teaching. Here's a list of the others [1].
These are independent, high-quality research institutions without political money or a designated political agenda.
The fact that they don't have an agenda written into their charter doesn't mean they don't have an agenda. Basically every American news organization is an example of this.
While your assessment may be true in many contexts, this is not one of them.
American hyper-polarization does not permeate other countries to the same degree, and German academia is actually full of sober, level-headed, nuanced people.
This has never been the case across the history of humanity -- there has never existed a non-biased institution, and it's incredibly naive to think that a German think tank funded by the German government would be any exception.
"Level headed" and "sober" people are not immune to the effects of incentives and conflicts of interest. Researchers are dependent on grants, on invitations to conferences, etc., and so are liable to follow trends (tariffs bad), and p-hack to support the mainstream narrative (as they do in this analysis with P values > 0.01).
> German academia is actually full of sober, level-headed, nuanced people
Thanks for the laugh. I hope you realize how pretentious this sounds. In actuality, Germany's GDP is about 1/6th of that of the US, so these German academics don't sound very bright for how "level headed" and "sober" they are.
If/since bias is everywhere and implicit, because every person is the product of their own experiences, why point it out here so explicitly?
You do not point that out every. Single. Time. somebody argues, even though it is always true, or do you? Because that would be too shallow: bias is the baseline, nothing sits below it, so there is no point in pointing at the ground every time. So when you do single it out, it is YOU who has an agenda.
No, just when Europeans (like yourself) need a reminder that they're not exceptional in their "level headedness" and "soberness", and in fact are anti-exceptional when it comes to real world outcomes like GDP.
I'm not a medical researcher, but my impression is that many fields find it difficult to produce the robust high-level risk comparisons you're asking about. If you're looking at blood fats, for example, even there you'll find many complicating contextual factors (age, sex, ethnicity, type of lipid, i.e. LDL or Lp(a) or ...?). The same may well be true for sugar. So it's not easy or cheap to combine detailed state-of-the-art measurements of different causes into one randomized controlled trial.
As for the effects of sugar, I think there's some evidence that's not too hard to find, e.g. some meta-analyses showing roughly a 10% dose-dependent increase in risk (RR ≈ 1.10) [1,2]. A lot of the literature seems to focus on beverages, e.g. this comparative cross-national study [3].
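To make the RR ≈ 1.10 figure concrete, here's a small illustration converting a relative risk into an absolute risk difference. The 10% baseline risk is a made-up placeholder for the sake of the arithmetic, not a number from the cited studies:

```python
# Illustrative only: what a relative risk of ~1.10 means in absolute
# terms. The baseline risk below is a hypothetical placeholder, not a
# figure taken from the cited meta-analyses.
rr = 1.10
baseline_risk = 0.10  # hypothetical risk in the low-sugar group

exposed_risk = baseline_risk * rr                  # risk in the high-sugar group
absolute_increase = exposed_risk - baseline_risk   # absolute difference

print(f"exposed risk: {exposed_risk:.1%}")           # 11.0%
print(f"absolute increase: {absolute_increase:.1%}") # 1.0%
```

The point is that a 10% relative increase on a modest baseline is a small absolute change, which is partly why such effects are hard to pin down outside of large studies.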
If you have a randomized controlled trial, the sugar dose is varied and other confounding variables are controlled by randomization. So you measure the causal impact of sugar only. There are studies showing that.
With observational studies, if you have a dose-dependent effect, then that's good evidence (although not completely conclusive) of a causal relationship. This is what many studies do.
If you have a meta analysis covering many primary studies, and if those vary a lot of context (i.e. countries, year, composition of the population), and you still get a consistent effect, then that's another piece of support for a causal relationship.
The few studies I've looked at seem to show a pretty robust picture of sugar being a cause, but there might be selection bias on my part - i.e. we'd need an umbrella review / meta-meta-analysis (which ideally accounts for publication bias) to get the best possible estimate.
Observational studies, and meta analyses relying on them, don't resolve the fundamental problem of causal inference. The best you can do without an experiment is a really clean natural experiment, but those are rare. It's hard to credibly establish a causal relationship without a robust experiment.
What are "the habits surrounding consumption of the beverage?" It's been my observation that soda drinkers drink soda all day, no matter what they are doing.
I get the motivation, but it honestly feels a bit weird to use tens of thousands of lines of python code to do something that you can just directly do in typst.
I mean, a CV is not really rocket science and there are quite a few great typst templates out there.
Point taken, but I'd prefer 200 lines of rocket science that I understand and control over 60k lines of (cleanly written and documented) rocket science.
(Although admittedly both plain typst and this project are still way less complex than LaTeX.)
It's always nice to see products that cater to the best users can be instead of the worst.
Personally, I put AI for writing in the same corner as the other pathologies you've listed (popularity counts etc.), so it's not for me. But some folks will see that differently.
I hear you, and I share a big part of that opinion on AI writing. From the feedback I got from some folks earlier, it seemed to be something many were interested in, which is why I designed it to work as a writing aid rather than to generate content fully from scratch and post automatically, which would essentially be spamming.
I am not very sold on the AI tool myself and am very open to removing it if more people share that opinion!
Yeah, it's been 5 years (almost 6) since Python 2.7 stopped receiving security updates, but it does still run on modern OSes.
Looking at the list, I'm actually kind of surprised there aren't more CVEs for Python 2.7, but if you're only running it locally or on an intranet, I could see letting it ride.
I'm with you regarding the argument, but want to nitpick:
"dismissing" a politician sounds like an easy fix but we probably don't want hyper-polarized dismissal wars where politicians are "shot down" immediately after being elected. That's why there are other mechanisms such as not re-electing, public shaming, transparency fora etc. ... we need to work on strengthening those, the accountability and transparency.