
One thing I have noticed on Mounjaro is a (at least subjectively) significant decrease in impulse spending / buying random crap off Amazon. I have ADHD and that has been a real problem for quite a while - even with ADHD medication (Elvanse/Vyvanse in my case).

The part of Mounjaro that regulates the craving side of the weight-loss equation (reducing 'food noise' and the desire for sugary and fatty foods) seems to affect other behaviors as well, presumably via its effect on the brain's reward circuitry. I believe there are also early preliminary studies indicating it can help with addictions like alcoholism.

Those drugs really are quite something. Shame they're so damn expensive. Insurance here in Germany is unfortunately legally prohibited from covering GLP-1 class drugs for weight loss unless you have a diabetes diagnosis.


I've been on Mounjaro and find it pretty inexpensive, but I'm using a high-dose pen for a low-dose injection. One 15mg pen is going to last me around 6 months at my current rate, so around 15 euros per week.

You are essentially amortizing a single dose over 6 months by micro-dosing and receiving a clearly sub-therapeutic dose. While that may work for your specific usage, it doesn't change the unit economics for someone who needs the standard therapeutic dose, which in my case at 10 mg is roughly €400 every 4 weeks.

It’s a bit like arguing that a Porsche GT3 RS is an 'affordable' car because the monthly payment is low, provided you finance it over 30 years. The sticker price hasn't changed, you've just engaged in extreme creative accounting to make it fit a monthly budget.


No, it's more like arguing a Miata will fit the same need as a GT3 at 1/20th the cost. Some people don't fit a Miata, but it's still a valid option for many. Though I am at a sub-clinical dose (not a microdose) I'm still losing between 1 and 3 pounds per week. Lots and lots of people optimise their dose and their pen size to make the economics and side effects more manageable. This is not at all unique to me. It's unfortunate if it doesn't fit your case but it is widely applicable nonetheless.

Are the drugs actually expensive, or just expensive for now because they can be?

Modern society basically decided that adding fluoride to drinking water and iodine to table salt for everyone was better than dealing with tooth decay and goiter.

I understand that peptide synthesis and cold-chain logistics are not as trivial as these elements, but this paper [1] estimates that GLP-1 manufacturing costs can be under a dollar per person per month, orders of magnitude less than current market rates!

Perhaps our future society will normalize taking a daily GLP-1 agonist with their other multivitamins at breakfast.

[1]: https://jamanetwork.com/journals/jamanetworkopen/fullarticle...


I suspect a big reason why Mounjaro is still fairly expensive here in Germany (I pay nearly €400 for a 10mg KwikPen - a 12.5mg KwikPen is nearly €500) is that health insurance is not allowed to cover it for anything but diabetes treatment.

If health insurance companies were able to cover these drugs, there would have to be negotiations between Eli Lilly and the insurers, and insurers have a much bigger lever than individual patients who pay out of pocket. Self-payers are just price-takers. We pay whatever Eli Lilly wants us to pay.


China can sell (at a profit) >99.8% pure tirzepatide, semaglutide, and retatrutide for <$3/weekly dose. This supply ends up at compounding pharmacies like Hims/Hers, but sometimes more directly to consumers through gray/black markets.

Another way to check whether the marginal cost of production contributes much to the price is to compare injectable semaglutide (~$1,200 for around 10mg/month) with oral semaglutide (Rybelsus), which is roughly $1,000 for around 420mg/month. That indicates that the cost of manufacturing semaglutide does not significantly contribute to the price of the FDA-approved drug.
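
Back-of-the-envelope, using those figures, the per-milligram price works out to roughly:

    injectable: $1,200 / 10 mg  ≈ $120 per mg
    oral:       $1,000 / 420 mg ≈ $2.40 per mg

That's about a 50x difference in price per milligram at a similar monthly cost, which is hard to explain by manufacturing cost alone.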


They are cheap now if you dig deep enough. Lots of vendors selling peptides.

Can you give some examples? And are those reliable?

Not an example, but maybe this is interesting for folks who haven't really heard of the peptide business before. https://www.theguardian.com/wellness/2026/feb/05/injectable-...

That’s really neat.

I’m ashamed that I have this wish that I were overweight and had an excuse to try a GLP1 just to see how it would affect my impulse control with non-food habits.

I guess there’s not much stopping me from buying some unregulated drugs from the internet and self-experimenting, but I haven’t heard experiences from people deliberately using them for anything but weight.


If it helps, I don't think there is anything to be ashamed of in wanting to try new things, even if you recognize it's inadvisable. If there were no physical consequences, I'd like to try all sorts of medicines to see what effect they would have on me.

As a user of Mounjaro, obtained from a doctor, I find the experience very interesting. It has all sorts of weird side effects that I don't expect. As a bioinformatician in training it's great fun to speculate about the causes, pathways, signals, and whatnot that might be involved as this drug perturbs so much stuff in my system.

It's not pleasurable per se, but it is interesting. I have changed my food habits significantly without actually trying. I think I was too impatient to eat to cook if that makes sense. Weirdly, foods taste better to me which I did not expect. I also have found myself really enjoying my hobbies more. This has resulted in a lot of 3d printer filament purchases, so my impulse control may not have been helped much.

As far as experimenting on yourself, it will likely require the cumulative effect of weeks or months to notice changes in non-food habits if such changes occur at all.

There's probably some online doctor that will prescribe this thing to you for several hundred dollars/euros or whatever. You may suffer greatly for your curiosity, though. There are instances of very unpleasant side effects, some of which I experience personally.


Depending on your appetite for risk, there's always the gray market. It's also a lot cheaper depending on what your insurance covers. I think I picked up a year's supply of semaglutide for under $200. I've been on some form of GLP for the last 2 years and for me there have been several tangible benefits related to ADHD.

https://gray.guide is a good starting point.


I'll note that calling it the "gray market" is really just people who are uncomfortable with the idea of buying drugs from a drug dealer trying to find a way to make it more palatable.

That's not a judgment thing on my part - I've got a freezer full of Chinese peptides, among other things.

But the raw API on all of this stuff is coming from China in a way that is effectively unregulated and with no recourse if anything goes wrong. Underground Chinese labs get raided and shut down (usually because they're also involved in producing AAS or opioid precursors), often leaving millions of dollars in unfulfilled orders. People get peptides with 0 active ingredient. People get peptides contaminated with disinfectant and have adverse reactions. People get mislabeled peptides. People get radically underdosed or overdosed peptides. And when a controversy hits, these labs close up shop one day and come back a week later under a new name. If you get a vial full of something truly harmful to you and you die, your loved ones have zero recourse.

Your local weed dealer has infinitely more accountability than these labs.

Testing isn't a panacea - people do endotoxin, heavy metal, HPLC, etc., but GC-MS and similar basically never happen - and without knowing what the potential substances are, the automatic peak matching even for GC-MS is often inaccurate to the point of uselessness. "Purity" reports from HPLC don't measure everything in the vial - just how pure the targeted peak is. It'll catch protein degradation, but it wouldn't tell you if there was a bunch of anthrax in the vial.

For me, the calculus still makes sense. I've got access to things that have worked incredibly well for me that are not yet available in the US, or in some cases, not likely to ever be. But the "gray" market is buying from overseas drug dealers that don't particularly give a fuck about you. They don't want to hurt you - you spend less money if they do - but they also aren't going out of their way to look after you. Most of them only started HPLC tests because the bodybuilding community demanded it, and these guys were selling AAS and HGH to them before they got into the GLP-1s, and then it became the standard.

These aren't parallel import goods getting sold in areas where they aren't supposed to or unauthorized retailers. These are drug dealers that get shut down by the Chinese government on a regular basis. Go look up QSC, SSA, SRY - and those are just some of the biggest names from the past year or so.


I like to call it the "gray" market because the substances themselves are gray, not because of their source. My weed dealer doesn't sell GLPs (yet, but I can see that coming). I haven't seen anyone arrested for having GLPs yet either-- although I have seen plenty of US based vendors have to close up shop due to legal pressure.

I do believe that risk can be (mostly) mitigated, mainly by sticking with longstanding vendors and by trying to minimize risk with the actual substances (researching proper dosing protocols, batch testing, not assuming dosing is accurate, starting out on lower doses with new kits, etc.). There is definitely risk involved; that said, I'm often dabbling in non-FDA-approved substances anyway, so I have zero recourse regardless if something happens.

Are there any testing/safety protocols that you follow?


> I haven't seen anyone arrested for having GLPs yet either-- although I have seen plenty of US based vendors have to close up shop due to legal pressure.

If you google "med spa arrest glp-1" you can find a good number of instances, e.g. https://www.wsaz.com/2025/09/19/woman-arrested-selling-black...

> I do that believe that risk can be (mostly) mitigated, mainly by sticking with longstanding vendors and by trying to minimize risk with the actual substances (researching proper dosing protocol, batch testing, not assuming dosing, starting out on lower dosing with new kits etc)

A bunch of longstanding vendors have had issues. SRY was one of the biggest names for direct-from-China orders and shipped peptides contaminated with disinfectant that caused severe reactions for some people. Nexaph is one of the biggest names in the US now, has tons of testing, etc., but a while back got a batch from whoever their manufacturer in China is that contained some unknown excipient, which got played off as a "test formulation," etc.

Batch testing helps, but it requires the original lab to have actually adhered to the batches in a way that others can track, which isn't always the case. Sometimes vial-top colors span multiple batches, the tests on the vendor's spreadsheet don't necessarily correspond to the batch being sold if you're trusting their testing, etc.

> Are there any testing/safety protocols that you follow?

Not much. I use a 0.22um PES filter into a cartridge and inject from it for a few weeks and call it a day. I don't even bother with my own testing at jano unless I have reason to believe something is off and need to confirm.

But I never got my LSD or DMT or anything tested before either so my risk tolerance is basically "eh, send it." I just can't in good conscience recommend people follow that same risk tolerance (though I won't begrudge adults the right to make informed decisions to inject basically anything they want into themselves, either.)


It would probably be interesting, but if you are not overweight, the appetite suppression will likely make this not a very healthy or very fun experiment. I started at 5mg in October, and even on that smaller dose I had to force myself to eat even just ~800kcal a day - especially in the early weeks. When you have a lot of weight to lose, that's a pretty welcome effect. When you are already at a healthy weight, not so much. That caloric intake would put most adults into a pretty deep caloric deficit.

I suspect that if the effects on non-weight indications check out in studies, we might see drugs that specifically target those effects without also slowing down your digestive tract. Addictions like nicotine dependence and alcoholism and their consequences cost health insurers (and us as a society) billions of euros/dollars each year, so there'd be a strong incentive to pursue this.


On Wegovy (semaglutide) I haven't noticed any change in my binges or impulsiveness. Slightly worse (not dramatic) depressive episodes, but that's about it.

I may well have done more hobby-related shopping 'binges'/impulse buying in place of eating/drinking binges while on Mounjaro.

But that wasn't such a bad thing - it was mostly due to feeling a bit more awake/alive in the evenings compared to when I'd be drinking or overeating.


That would be in line with what I've read about it helping people with mild addictions.

You could, but Claude Code's memory system works well for specialized tasks like coding - not so much for a general-purpose assistant. It stores everything in flat markdown files, which means you're pulling in the full file regardless of relevance. That costs tokens and dilutes the context the model actually needs.

An embedding-based memory system (letta, mem0, or a self-built PostgreSQL + pgvector setup) lets you retrieve selectively and only grab what's relevant to the current query. Much better fit for anything beyond a narrow use case. Your assistant doesn't need to know your location and address when you're asking it to look up whether sharks are indeed older than trees, but it probably should know where you live when you ask it about the weather, or good Thai restaurants near you.
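
As a rough sketch of what I mean by selective retrieval (the table name, embedding dimension, and the embed() helper are placeholders here, not any particular library's API):

    import psycopg2

    def embed(text):
        # Placeholder: return a 384-dim embedding, e.g. from
        # sentence-transformers or an embeddings API.
        raise NotImplementedError

    def to_pgvector(vec):
        # pgvector expects a literal like '[0.1,0.2,...]'
        return "[" + ",".join(str(x) for x in vec) + "]"

    conn = psycopg2.connect("dbname=assistant")
    cur = conn.cursor()
    cur.execute("""
        CREATE EXTENSION IF NOT EXISTS vector;
        CREATE TABLE IF NOT EXISTS memories (
            id serial PRIMARY KEY,
            content text NOT NULL,
            embedding vector(384)  -- must match your embedding model's dimension
        );
    """)

    def remember(content):
        cur.execute("INSERT INTO memories (content, embedding) VALUES (%s, %s)",
                    (content, to_pgvector(embed(content))))

    def recall(query, k=5):
        # '<=>' is pgvector's cosine-distance operator: smaller = more similar
        cur.execute("SELECT content FROM memories ORDER BY embedding <=> %s::vector LIMIT %s",
                    (to_pgvector(embed(query)), k))
        return [row[0] for row in cur.fetchall()]

The assistant then only injects the handful of recall() hits into the prompt instead of the whole memory file.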


Disclaimer: Haven't used any of these (was going to try OpenClaw but found too many issues). I think the biggest value-add is agency. Chat interfaces like Claude/ChatGPT are reactive, but agents can be proactive. They don't need to wait for you to initiate a conversation.

What I've always wanted: a morning briefing that pulls in my calendar (CalDAV), open Todoist items, weather, and relevant news. The first three are trivial API work. The news part is where it gets interesting and more difficult - RSS feeds and news APIs are firehoses. But an LLM that knows your interests could actually filter effectively. E.g., I want tech news but don't care about Android (iPhone user) or macOS (Linux user). That kind of nuanced filtering is hard to express in traditional rules but trivial for an LLM.
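
A minimal sketch of that filtering step might look like this, assuming the OpenAI Python SDK and feedparser; the feed URL, model name, and prompt are just placeholders:

    import feedparser
    from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

    INTERESTS = "Linux, self-hosting, LLM tooling. Not interested in Android or macOS news."

    client = OpenAI()
    feed = feedparser.parse("https://hnrss.org/frontpage")  # placeholder feed

    for entry in feed.entries:
        verdict = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system",
                 "content": f"My interests: {INTERESTS} Reply with only KEEP or SKIP."},
                {"role": "user",
                 "content": f"{entry.title}\n{entry.get('summary', '')}"},
            ],
        ).choices[0].message.content.strip()
        if verdict == "KEEP":
            print(f"- {entry.title} ({entry.link})")

In practice you'd batch the items into one call and feed the keepers into the briefing, but the filtering logic itself really is that small.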


But can't you do the same using appropriate MCP servers with any of the LLM providers? Even just a generic browser MCP is probably enough to do most of these things. And ChatGPT has Tasks that are also proactive/scheduled. Not sure if Claude has something similar.

If all you want to do is schedule a task, aren't there much easier solutions - like a few lines of Python - instead of installing something so heavy in a VM that comes with a whole bunch of security nightmares?
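
For the scheduling part, the "few lines of Python" really are about this much (using the third-party `schedule` package here as one option; cron or a systemd timer does the same job):

    import time
    import schedule  # pip install schedule

    def morning_briefing():
        # fetch calendar/weather/news and call an LLM API here
        print("good morning")

    schedule.every().day.at("07:00").do(morning_briefing)

    while True:
        schedule.run_pending()
        time.sleep(60)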


> But can't you do the same just using appropriate MCP servers with any of the LLM providers?

Yeah, absolutely. And that was going to be my approach for a personal AI assistant side project. No need to reinvent the wheel writing a Todoist integration when MCPs exist.

The difference is where it runs. ChatGPT Tasks and MCP through the Claude/OpenAI web interfaces run on their infrastructure, which means no access to your local network — your Home Assistant instance, your NAS, your printer. A self-hosted agent on a mac mini or your old laptop can talk to all of that.

But I think the big value-add here might be "disposable automation". You could set up a Home Assistant automation to check the weather and notify you when rain is coming because you're drying clothes on the clothesline outside. That's 5 minutes of config for something you might need once. Telling your AI assistant "hey, I've got laundry on the line. Let me know if rain's coming and remind me to grab the clothes before it gets dark" takes 10 seconds and you never think about it again. The agent has access to weather forecasts, maybe even your smart home weather station in Home Assistant, and it can create a sub-agent, which polls those once every x minutes and pings your phone when it needs to.
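
Under the hood, the sub-agent's job is not much more than a loop like this (Open-Meteo and ntfy.sh as stand-in services; the coordinates, topic, and threshold are made up):

    import time
    import requests

    def rain_expected(lat=52.52, lon=13.40, threshold=50):
        r = requests.get("https://api.open-meteo.com/v1/forecast", params={
            "latitude": lat, "longitude": lon,
            "hourly": "precipitation_probability",
            "forecast_days": 1,
        }, timeout=10)
        probs = r.json()["hourly"]["precipitation_probability"]  # hourly, for today
        return max(probs) >= threshold

    while True:
        if rain_expected():
            # ntfy.sh turns a plain HTTP POST into a push notification on your phone
            requests.post("https://ntfy.sh/my-laundry-topic",
                          data="Rain is coming - grab the laundry off the line!")
            break
        time.sleep(15 * 60)  # check again in 15 minutes

The point isn't that this is hard to write - it's that you'd never bother writing it for a one-off, while telling the agent to do it costs nothing.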


But if you run e.g. Claude/Codex/opencode/etc locally you also have access to your local machine and network? What is the difference?

OpenClaw allows the LLM to make its own schedule, spawn subagents, and make its own tools.

Yes, it's basically what some "appropriate MCP servers" can do, but OpenClaw sells it as a whole preconfigured package.


I have a few cron jobs that are basically `opencode run` with a context file, and it works very well.

At some point OpenClaw will take over in terms of its benefits, but it doesn't feel close yet compared to the simplicity of just running the job every so often and letting OpenCode decide what it needs to do.

Currently it shoots me a notification if my trip to work is likely to be delayed. Could I do it manually? Well, sure.


But this could be done for 1/100 the cost by only delegating the news-filtering part to an LLM API. No reason not to have an LLM write you the code, too! But putting it in front of task scheduling and API fetching — turning those from simple, consistent tasks to expensive, nondeterministic ones — just makes no sense.

Like I said, the first examples are fairly trivial, and you absolutely don't need an LLM for those. A good agent architecture lets the LLM orchestrate but the actual API calls are deterministic (through tool use / MCPs).

My point was specifically about the news filtering part, which was something I had tried in the past but never managed to solve to my satisfaction.

The agent's job in the end for a morning briefing would be:

  - grab weather, calendar, Todoist data using APIs or MCP  
  - grab news from select sources via RSS or similar, then filter relevant news based on my interests and things it has learned about me  
  - synthesize the information above

The steps that explicitly require an LLM are the last two. The value is in the personalization through memory and my feedback, but also in the LLM's ability to synthesize the information - not just regurgitate it. Here's what I mean: I have a task to mow the lawn on my Todoist scheduled for today, but the weather forecast says it's going to be a bit windy and rain all day. At the end of the briefing, the assistant can proactively offer to move the Todoist task to tomorrow when it will be nicer outside, because it knows the forecast. Or it might offer to move it to the day after tomorrow, because it also knows I have to attend my nephew's birthday party tomorrow.
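
As a sketch of that synthesis step (the fetch_* helpers stand for whatever deterministic API/MCP wrappers you already have, and the model name is a placeholder; assuming the OpenAI SDK):

    from openai import OpenAI

    client = OpenAI()

    briefing = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system", "content": (
                "You are my morning-briefing assistant. Summarize the data, point out "
                "conflicts (e.g. outdoor tasks on rainy days), and suggest concrete "
                "rescheduling options instead of just listing items."
            )},
            {"role": "user", "content": (
                f"Weather: {fetch_weather()}\n"      # hypothetical deterministic wrappers,
                f"Calendar: {fetch_calendar()}\n"    # not defined here
                f"Tasks: {fetch_todoist()}\n"
                f"News (pre-filtered): {fetch_filtered_news()}"
            )},
        ],
    ).choices[0].message.content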

That’s ChatGPT Pulse

Not saying the frontier models aren't smarter than the ones I can run on my two 4090s (they absolutely are) but I feel like you're exaggerating the security implications a bit.

We've seen some absolutely glaring security issues with vibe-coded apps / websites that did use Claude (most recently Moltbook).

No matter whether you're vibe coding with frontier models or local ones, you simply cannot rely on the model knowing what it is doing. Frankly, if you rely on the model's alignment training for writing secure authentication flows, you are doing it wrong. Claude Opus or Qwen3 Coder Next isn't responsible if you ship insecure code - you are.


You're right, and the Moltbook example actually supports the broader point - even Claude Opus with all its alignment training produced insecure code that shipped. The model fallback just widens the gap.

I agree nobody should rely on model alignment for security. My argument isn't "Claude is secure and local models aren't" - it's that the gap between what the model produces and what a human reviews narrows when the model at least flags obvious issues. Worse model = more surface area for things to slip through unreviewed.

But your core point stands: the responsibility is on you regardless of what model you use. The toolchain around the model matters more than the model itself.


IMO the value and differentiating factor is basically just the ability to organize them cleanly with accompanying scripts and references, which are only loaded on demand. But a skill just by itself (without scripts or references) is essentially just a slash command with metadata.

Another value add is that theoretically agents should trigger skills automatically based on context and their current task. In practice, at least in my experience, that is not happening reliably.


I don't see how that makes it uniquely viable in France. Germany has something very much like this too. And we've had it for nearly 13 years.

> Since 31 August 2013 companies which operate public petrol stations or have the power to set their prices are required to report price changes for the most commonly used types of fuel, i.e. Super E5, Super E10 and Diesel “in real time” to the Market Transparency Unit for Fuels. This then passes on the incoming price data to consumer information service providers, which in turn pass it on to the consumer.

As a consumer, there is no direct API by the MTS-K that you can use, but there are some services like Tankerkoenig which pass this data on to you. I have used their API in Home Assistant before I switched to an EV.

https://www.bundeskartellamt.de/EN/Tasks/markettransparencyu...
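
For reference, a Tankerkoenig query is roughly this (endpoint and parameter names are from my memory of their docs, so double-check them; you need a free API key from them):

    import requests

    resp = requests.get(
        "https://creativecommons.tankerkoenig.de/json/list.php",
        params={
            "lat": 52.52, "lng": 13.40,  # search center (placeholder coordinates)
            "rad": 5,                    # radius in km
            "sort": "price",
            "type": "e10",
            "apikey": "YOUR_API_KEY",
        },
        timeout=10,
    )
    for station in resp.json().get("stations", []):
        print(station["brand"], station["price"])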


It's been a few days, but when I tried it, it just completely bricked itself because it tried to install a plugin (matrix) even though that was already installed. That wasn't some esoteric config or anything. It bricked itself right in the onboarding process.

When I investigated the issue, I found a bunch of hardcoded developer paths and a handful of other issues and decided I'm good, actually.

    sre@cypress:~$ grep -r "/Users/steipete" ~/.nvm/versions/node/v24.13.0/lib/node_modules/openclaw/ | wc -l
    144

And bonus points:

    sre@cypress:~$ grep -Fr "workspace:*" ~/.nvm/versions/node/v24.13.0/lib/node_modules/openclaw/ | wc -l
    41

Nice build/release process.

I really don't understand how anyone just hands this vibe coded mess API keys and access to personal files and accounts.


From my experience: TDD helps here - write (or have AI write) tests first, review them as the spec, then let it implement.
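
To make "tests as the spec" concrete, this is the kind of thing I mean (the module and function names are made up for illustration):

    import pytest
    from pricing import apply_discount  # hypothetical module under test

    # Reviewing these before any implementation exists is reviewing the spec.
    def test_discount_is_capped_at_50_percent():
        assert apply_discount(price=100.0, percent=80) == 50.0

    def test_negative_discount_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(price=100.0, percent=-5)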

But when I use Claude Code, I also supervise it somewhat closely. I don't let it go wild, and if it starts to make changes to existing tests, it better have a damn good reason or it gets the hose again.

The failure mode here is letting the AI manage both the implementation and the testing. May as well ask high schoolers to grade their own exams. Everyone got an A+, how surprising!


> TDD helps here - write (or have AI write) tests first, review them as the spec

I agree, although I think the problem usually comes in writing the spec in the first place. If you can write detailed enough specs, the agent will usually give you exactly what you asked for. If your spec is vague, it's hard to eyeball whether the tests or even the implementation of the tests match what you're looking for.


I think point a) is actually backwards and potentially counterproductive to the petition's stated goals.

The petition explicitly highlights maintainer burnout and the "unausgewogene Verantwortungslast" (unbalanced responsibility burden) as core problems. Excluding project owners/maintainers from recognition would exclude precisely the people carrying the heaviest load – the ones triaging issues at 2am, reviewing PRs, making architectural decisions, and bearing the psychological weight of knowing critical infrastructure depends on their continued engagement.

The XZ Utils incident is instructive here: the attack vector was specifically a burned-out solo maintainer who was socially engineered because he was overwhelmed and desperate for help. If anything, recognition and support structures should prioritize these individuals, not exclude them. Your concern about "pet projects with no impact" is valid, but the solution isn't to exclude owners categorically – it's to define impact criteria. A threshold based on adoption metrics, dependency chains, or inclusion in public infrastructure would filter out portfolio projects without penalizing the people doing the most critical work.

Point c) also seems problematic for similar reasons: much of maintainer work isn't "merged contributions" – it's code review, issue triage, documentation, community management, security response. Under your criteria, the person who reviews and merges 500 PRs per year while writing none themselves would receive no recognition.

The petition is trying to address a structural problem where society extracts massive value from unpaid labor while providing no support structures. Excluding the most burdened participants seems like it would perpetuate rather than solve that problem.


I think limiting the recognition to repos that reach some level of significance would solve a lot of the problems.

It would anger the smaller and fresher projects, but it's the only way to avoid having people create hobby projects or portfolio-filling slop repos and try to claim them as civic service.

This reminds me of a trend a few years ago when I started seeing a lot of applications from people who listed themselves as founders of a charitable foundation on their resume. I felt impressed the first time I saw it but got suspicious after the 3rd or 4th. Then I realized that it doesn’t take much work to incorporate a charitable foundation and list your family and friends as board members. The hard work was actually raising and disbursing money. When I started asking for details about how much the organization did I got wishy-washy answers and a lot of changing the subject. This is why details matter and it’s not as simple as giving everyone who claims an achievement the same reward, however small the reward may be.


Strange framing, isn't it?

Bariatric surgery shows 25-65% significant regain rates depending on definition and timeframe. And regular dieting is even worse. Nobody would frame that as a safety issue. That's... just how weight loss works, not a unique GLP-1 problem.

Calling a return of symptoms (obesity) a "safety issue" is like saying insulin has "no safe off-ramp" because diabetics get hyperglycemic when they stop taking it.

Fear gets clicks, I guess.


At some point, somebody at the site changed the title. The old title was "GLP-1 Drugs Improve Heart Health, But Only If You Keep Taking Them."

How do I know that? The URL slug tells the tale.

> Fear gets clicks, I guess

I strongly suspect this is the reason the title was changed.


I know from a discussion with one of the people who works there that they (Ars Technica) sometimes do A/B title tests.


The original title is so much more informative. It might be so informative that many people didn't feel a need to read the article.


It's also very strange because more than 75% had some level of sustained weight loss after several years.

That's way better than any other weight loss program. Nothing else even comes close.


well, it could also be from a "do no harm" standpoint.

(although looking into it, it seems many oaths never actually say "first do no harm")

