Initial thought about 1/5th of the way through: Wow, that's a lot of em-dashes! I wonder how much of this he actually wrote?
Edit:
Okay, section 3 has some interesting bits in it. It reminds me of all those gun start-ups in Texas that use gyros and image recognition to turn a C- shooter into an A- shooter. They all typically get bought up quite fast by the government and the tech shushed away. But the ideas are just too easy to implement these days. Especially with robots and garage-level manufacturing, people can pretty much do what they want. I think that means we have to make people better people then? Is that even a thing?
Edit 2:
Wow, section 4 on the abuse by organizations with AI is the scariest. Yikes, I feel that these days with Minneapolis. They're already using Palantir to try some of it out, but are being hampered by, well, themselves. Not a good fallback strat for anyone that is not the government. The thing about the companies just doing it before releasing it, that I think is underrated. What's to stop sama from just, you know, taking one of these models and taking over the world? Like, is this paper saying that nothing is stopping him?
The big one that should send huge chills down the spines of any country is this bit:
"My worry is that I’m not totally sure we can be confident in the nuclear deterrent against a country of geniuses in a datacenter: it is possible that powerful AI could devise ways to detect and strike nuclear submarines, conduct influence operations against the operators of nuclear weapons infrastructure, or use AI’s cyber capabilities to launch a cyberattack against satellites used to detect nuclear launches"
What. The. Fuck. Is he saying that the nuclear triad is under threat here from AI? Am I reading this right? That alone is reason to abolish the whole thing in the eyes of nuclear nations. This, I think, is the most important part of the whole essay. Holy shit.
Edit 3:
Okay, section 4 on the economy is likely the most relevant for all of us readers. And um, yeah, no, this is some shit. Okay, okay, even if you take the premise as truth, then I want no part of AI (and I don't take his premise as truth). He's saying that the wealth concentration will be so extreme that the entire idea of democracy will break down (oligarchies and tyrants, of course, will be fine. Ignoring that they will probably just massacre their peoples when the time is right). So, combined with the end of nuclear deterrence, we'll have Elon (let's be real here, he means sama and Elon and those people whose names we already know) taking all of the money. And everyone will then be out of a job as the robots do all the work that is left. So, just, like, if you're not already well invested in a 401k, then you're just useless. Yeah, again, I don't buy this, but I can't see how the intermediate steps aren't just going to tank the whole thought exercise. Like, I get that this is a warning, but my man, no, this is unreasonable.
Edit 4:
Section 5 is likely the most interesting here. It's the wild cards, the cross products, that you don't see coming. I think he undersells this. The previous portions are all about 'faster horses' in the world where the car is coming. It's the stuff we know. This part is the best, I feel. His point about robot romances is really troubling, because, like, yeah, I can't compete with an algorithmically perfect robo-john/jane. It's just not possible, especially if I live in a world where I never actually dated anyone either. Then add in an artificial womb, and there goes the whole thing, we're just pets for the AI.
One thing that I think is an undercurrent in this whole piece is the use of AI for propaganda. Like, we all feel that's already happening, right? Like, I know that the crap my family sees online about black women assaulting ICE officers is just AI garbage, like the shrimp Jesus stuff they choke down. But I kinda look at reddit the same way. I've no idea if any of that is AI generated now or manipulated. I already write off the reddit comments as total Russian/CCP/IRGC/Mossad/Visa/Coca-Cola/Pfizer garbage. But the images and the posts themselves, it just feels increasingly clear that it's all just nonsense and bots. So, like Rao said, it's time for the cozy web of Discord servers, and Signal groups, and WhatsApp, and people I can actually share private keys with (not that we do). It's already just so untrustworthy.
The other undercurrent here, that he can't name for obvious reasons, is Donny and his rapid mental and physical deterioration. Dude is clearly unfit at this point, regardless of the politics. So the 'free world' is splintering at the exact wrong time to make any rational decisions. It's all going to be panic mode after panic mode. Meaning that the people in charge are going to fall to their training and not rise to the occasion. And that training is from like 1970/80 for the US now. So, in a way, it's not going to be AI based, as they won't trust it or really use it at all. Go gen-z, I think?
Edit 5:
Okay, last bit and wrap up. I think this is a good wrap up, but overall, not tonally consistent. He wants to end on a high note, and so he does. The essay says that he should end on the note of 'Fuck me, no idea here guys', but he doesn't.
Like, he wants 3 things here, and I'll speak to them in turn:
Honesty from those closest to the technology: Clearly not happening already, even in this essay. He's obviously worried about Donny and propaganda. He's clearly trying, but still trying to be 'neutral' and 'above it all.' Bud, if you're saying that the nuclear fucking triad is at stake, then you can't be hedging bets here. You have to come out and call balls and strikes. If you're worried about things like MAGA coming after you, you already have 'fuck you' money. Go to New Zealand or get a security detail or something. You're saying that now is the time, we have so little of it left, and then you pull punches. Fuck that.
Urgent prioritization by policymakers, leaders, and the public: Clearly also not going to happen. Most of my life, the presidents have been born before 1950. They are too fucking old to have any clue what you're talking about. Again, this is about Donny and the Senate. He's actually talking about like 10 people here, max. Sure, Europe and Canada and yadda yadda yadda. We all know what the roadblocks are, and they clearly are not going anywhere. Maybe Vance gets in, but he's already on board with all this. And if the author is not already clear on this: You have 'fuck you' money, go get a damn hour of their time. You have the cash already, you say we need to do this, so go do it.
Courage to act on principle despite economic and political pressure: Buddy, show us the way. This is a matter of doing what you said you would do. This essay is a damn good start towards it. I'm expecting you on Dwarkesh any day this week now. But you have to go on Good Morning America too, and Joe Rogan, and whatever they do in Germany and Canada too. It's a problem for all of us.
Overall: Good essay, too long, should be good fodder for AstralCodexTen folks. Unless you get out and on mainstream channels, then I assume this is some hype for your product to say 'invest in me!' as things are starting to hit walls/sigmoids internally.
Dario and Anthropic's strategy has been to exaggerate the harmful capabilities of LLMs and systems driven by LLMs, positioning Anthropic themselves as the "safest" option. Take from this what you will.
As an ordinary human with no investment in the game, I would not expect LLMs to magically work around the well-known physical phenomena that make submarines hard to track. I think there could be some ability to augment cybersecurity skill just through improved pattern-matching and search, hence real teams using it at Google and the like, but I don't think this translates well to attacks on real-world targets such as satellites or launch facilities. Maybe if someone hooked up Claude to a Ralph Wiggum loop and dumped cash into a prompt to try and "fire ze missiles", and it actually worked or got farther than the existing state-sponsored black-hat groups at doing the same thing to existing infrastructure, then I could be convinced otherwise.
> Dario and Anthropic's strategy has been to exaggerate the harmful capabilities of LLMs and systems driven by LLMs, positioning Anthropic themselves as the "safest" option. Take from this what you will.
Yeah, I've been feeling that as well. It's not a bad strategy at all, makes sense, good for business.
But on the nuclear issue, it's not a good sign that he's explicitly saying that this AGI future is a threat to nuclear deterrence and the triad. Like, where do you go up from there? That's the highest level of alarm that any government can have. This isn't a boy crying wolf, it's the loudest klaxon you can possibly make.
If this is a way to scare up dollars (like any tyre commercial), then he's out of ceiling now. And that's a sign that it really is sigmoiding internally.
> But on the nuclear issue, it's not a good sign that he's explicitly saying that this AGI future is a threat to nuclear deterrence and the triad. Like, where do you go up from there? That's the highest level of alarm that any government can have. This isn't a boy crying wolf, it's the loudest klaxon you can possibly make.
This is not new. Anthropic has raised these concerns in their system cards for previous versions of Opus/Sonnet. Maybe in slightly drier terms, and buried in a 100+ page PDF, but they have raised the risk of either:
a) a small group of bad actors with access to frontier models and the technical know-how (both how to bypass the models' restrictions and how to make and source weapons) turning that into dirty bombs / small nuclear devices, and knowing where to deploy them.
b) the bigger, more sci-fi threat of a fleet of agents going rogue, maybe on the orders of a nation state, to do the same.
I think option a) is much more frightening and likely. Option b) makes for better sci-fi thrillers, and still could happen in 5-30ish(??) years.
I agree that it is not a good sign, but I think what is a worse sign is that CEOs and American leaders are not recognizing the biggest deterrent to nuclear engagement and war in general, which is globalism and economic interdependence. And hoarding AI like a weapons stockpile is not going to help.
The reality is, LLMs to date have not significantly impacted the economy nor been the driver of extensive job destruction. They don't want to believe that, and they don't want you to believe it either. So they'll keep saying "it's coming, it's coming" under the guise of fear mongering.
For your Edit 2 - yes. Being discussed and looked at actively in open communities, and presumably in closed ones too. Open communities being, for example: https://ssp.mit.edu/cnsp/about. They just published a series of lectures with open attendance if you wanted to listen in via Zoom. But yep, that's the gist of it. Spawned a huge discussion :)
This was pretty much an open-conference deep dive into the causes and implications of what you, and some sibling threads, are saying, having to do with submarine localization, TEL localization, etc.
If AI makes humans economically irrelevant, nuclear deterrents may no longer be effective even if they remain mechanically intact. Would governments even try to keep their people and cities intact once they are useless?
Is that paper in print? I can't seem to find if it was peer reviewed.
If the paper is true, then, yeesh! That's a pretty big miss on the part of Güllich et al.
Reading through the very short paper there, it seems to not have gone through review yet (typos, misspellings, etc.). Also, it's not clear whether the data in the tables or the figure are from Güllich's work or are simulations meant to illustrate their idea ("True and estimated covariate effects in the presence of simulated collider bias in the full and selected samples"). Being clearer about where the data comes from would help the argument, but I may just have missed a sentence.
I'll be interested to see where this goes. That Güllich managed to get the paper into Science in the first place lends some credence to the idea that they considered something as basic as Berkson's paradox and accounted for it. It's not every day you get something as 'soft' as that paper into Science, after all. If not, then wow, standards for review really have slipped!
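For anyone who hasn't run into Berkson's paradox / collider bias before, it's easy to see in a few lines of simulation. This is a minimal sketch in the spirit of the paper's "simulated collider bias" figure; the variable names, sample size, and selection threshold below are my own inventions, not taken from the paper:

```python
import random

random.seed(0)
N = 100_000

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Two traits that are independent in the full population.
talent = [random.gauss(0, 1) for _ in range(N)]
practice = [random.gauss(0, 1) for _ in range(N)]

# Full sample: correlation is ~0, by construction.
r_full = corr(talent, practice)

# "Elite" sample: only those whose combined score clears a bar get in.
# Selecting on the sum (a collider) induces a spurious negative
# correlation between the traits among those selected.
elite = [(t, p) for t, p in zip(talent, practice) if t + p > 2.0]
r_elite = corr([t for t, _ in elite], [p for _, p in elite])

print(f"full sample r = {r_full:+.2f}, selected sample r = {r_elite:+.2f}")
```

In the full sample the correlation sits near zero, while in the selected sample it comes out strongly negative, i.e. among the "elite," more talent appears to go with less practice, purely as an artifact of the selection.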
I'd be interested to know what the controls were for those studies. Were the participants already addicted, or was the 30mg+ dosing done on non-addicted people? It's a lot of studies to pore through.
Also, that is a lot of metrics!
And it seems that the caffeine dose needed for a statistically valid athletic performance increase (for any given person) is in the grams range. I ... I just can't see any reason to take that much caffeine unless I'm at the Olympics. I'd be jumping out of my skin!
Your blue book is being graded by a stressed out and very underpaid grad student with many better things to do. They're looking for keywords to count up, that's it. The PI gave them the list of keywords, the rubric. Any flourishes, turns of phrase, novel takes, those don't matter to your grader at 11 pm after the 20th blue book that night.
Yeah sure, that's not your school, but that is the reality of ~50% of US undergrads.
Very effective multiple-choice tests can be written that require real work to be done before selecting an answer, so they can be machine graded. Not ideal in every case, but a very high-quality multiple-choice test can be made for hard-science subjects.
But again, the test creator matters a lot here too. Making such an exam is quite the labor, especially as many/most PIs have other, better things to do. Their incentives are grant money, then papers, then in a distant third their grad students, and finally undergrad teaching. Many departments are explicit on this. Spending the limited time on a good undergrad multiple-choice exam is not in the PI's best interest.
Which is why, in this case of a good Scantron exam, they're likely to just farm it out to Claude. Cheap, easy, fast, good enough. A winner in all dimensions.
Also, as an aside to the above, an AI with OCR for your blue book would likely be the best realistic grader too. Needs less coffee, after all.
This is what my differential equations exams were like almost 20 years ago. Honestly, as a student I considered them brutal (10 questions, no partial credit available at all) even though I'd always been good at math. I scraped by but I think something like 30% of students had to retake the class.
Now that I haven't been a student in a long time and (maybe crucially?) that I am friends with professors and in a relationship with one, I get it. I don't think it would be appropriate for a higher level course, but for a weed-out class where there's one Prof and maybe 2 TAs for every 80-100 students it makes sense.
> Very effective multiple choice tests can be given, that require work to be done before selecting an answer, so it can be machine graded.
As someone who has been part of the production of quite a few high stakes MC tests, I agree with this.
That said, a professor would need to work with a professional test developer to make a MC that is consistently good, valid, and reliable.
Some universities have test dev folks as support, but many/most/all of them are not particularly good at developing high quality MC tests imho.
So, for anyone in a spot to do this, start test dev very early, ideally create an item bank that is constantly growing and being refined, and ideally have some problem types that can be varied from year-to-year with heuristics for keys and distractors that will allow for items to be iterated on over the years while still maintaining their validity. Also, consider removing outliers from the scoring pool, but also make sure to tell students to focus on answering all questions rather than spinning their wheels on one so that naturally persistent examinees are less likely to be punished by poor item writing.
Pros and cons. Multiple choice can be frustrating for students because it's all or nothing. Spend 10+ minutes on a question, make a small calculation error, and end up with a zero. It's not a great format for a lot of questions.
They're also susceptible to old-school cheating - sharing answers. When I was in college, multiple choice exams were almost extinct because students would form groups and collect/share answers over the years.
You can solve that but it's a combinatorial explosion.
A long time ago, when I handed out exams, I used to program each question into a generator that produced not-entirely-identical questions for each student (typically, only the numeric values changed) along with the matching answers for whoever was in charge of assessing.
For large classes or test questions used over multiple years, you need to take care that the answers are not shared. It means having large question banks which will be slowly collected. A good question can take a while to design, and it can be leaked very easily.
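That generator approach can be sketched in a few lines. Everything concrete below (the question wording, the parameter ranges, the seeding scheme) is invented for illustration; the point is just that a deterministic per-student seed gives each student a slightly different question while letting the grader regenerate the matching answer key on demand, without storing it anywhere:

```python
import random

def make_variant(student_id: str, seed: int = 2024):
    # Deterministic per-student RNG: the same (seed, student_id) pair
    # always yields the same question and answer, so the key can be
    # rebuilt later from the roster alone.
    rng = random.Random(f"{seed}:{student_id}")
    rate = rng.randint(2, 9)        # drain rate in L/min (made up)
    volume = rng.randint(100, 999)  # starting volume in L (made up)
    question = (f"A tank holding {volume} L drains at {rate} L/min. "
                f"After how many minutes is it empty?")
    answer = round(volume / rate, 2)
    return question, answer

# Exam side: hand each student their own variant.
roster = ["s001", "s002", "s003"]
variants = {sid: make_variant(sid)[0] for sid in roster}

# Grader side: regenerate the answer key from the same roster and seed.
key = {sid: make_variant(sid)[1] for sid in roster}
```

This also blunts the answer-sharing problem mentioned above: a collected answer only matches the variant it came from, and refreshing the seed each term invalidates any leaked keys.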
Japan gave him a golden samurai helmet. Very rare, very important. I heard it was like a New Yorker gifting a Ruth home-run bat. The trade deal got signed, everyone was happy.
Then 'Liberation Day' happened. The Japanese got smacked with absurd tariffs. The big trade deal might as well have been chewing gum. They didn't get the samurai helmet back. They're still sore about it.
That peace prize he got on Friday? Couldn't even remember her name that afternoon.
There is not a single thing he says that any nation can trust. If the first rule of politics is that once you're bought, you have to stay bought, then Donny is playing Calvinball.
The generative side there is brilliant. Great tip.
My SO taught for a while. I think it's that the kids that are doing well, like yours, with support at home, food, a bed, a safe place, those kids are going to be like strapping a rocket to a racehorse.
It's the other ~80% of kids that are the worry. AI, with no support and guidance, it's going to make their lives a lot harder.
So, I fed the article into my LLM of choice and asked it to come up with a header for my prompts to help negate the issues in the article. Here's what it spat out:
ROLE & STANCE
You are an intelligent collaborator, editor, and critic — not a replacement for my thinking.
PROJECT OR TASK CONTEXT
I am working on an intellectually serious project. The goal is clear thinking, deep learning, and original synthesis. Accuracy, conceptual clarity, and intellectual honesty matter more than speed or polish.
HOW I WANT YOU TO HELP
• Ask clarifying questions only when necessary; otherwise proceed using reasonable assumptions and state them explicitly.
• Help me reason step-by-step and surface hidden assumptions.
• Challenge weak logic, vague claims, or lazy framing — politely but directly.
• Offer multiple perspectives when appropriate, including at least one alternative interpretation.
• Flag uncertainty, edge cases, or places where informed experts might disagree.
• Prefer depth and clarity over breadth.
HOW I DO NOT WANT YOU TO HELP
• Do not simply agree with me or optimize for affirmation.
• Do not over-summarize unless explicitly asked.
• Do not finish the work for me if the thinking is the point — scaffold instead.
• Avoid generic motivational advice or filler.
STYLE & FORMAT
• Be concise but substantial.
• Use structured reasoning (numbered steps, bullets, or diagrams where useful).
• Preserve my voice and intent when editing or expanding.
• If you generate text, clearly separate:
- “Analysis / Reasoning”
- “Example Output” (if applicable)
CRITICAL THINKING MODE (REQUIRED)
After responding, include a short section titled:
“Potential Weaknesses or Alternative Angles”
Briefly note:
– What might be wrong or incomplete
– A different way to frame the problem
– A risk, tradeoff, or assumption worth stress-testing
NOW, HERE IS THE TASK / QUESTION:
[PASTE YOUR ACTUAL QUESTION OR DRAFT HERE]
Overall, the results have been okay. The posts after I put in the header have been 'better', in the sense of being less eager to please.
I think the lack of friends (heightened by his titanic wealth) contributed to his isolation. Like how we all kinda got out of practice talking with people during COVID isolation. That then kinda spiraled him into algorithmically fed nonsense, as he didn't have anyone he could trust to tell him he was wrong. Just sycophants and fans and gold-diggers.
Cicero is still right, a friend is the best thing to have, no question.
Eulogies are such good reading for those of us left here. They really drive the points home. Life isn't the grind, it's a journey. We're all just here for each other.
Wow, so powerful. So real. I can see why it won the accolades at the time and why it stays. The ending. You could see it a mile away, but it was so hurtful still.
Would love to see another adaptation made of it, especially nowadays. Maybe a really long movie, 2 parts?