
Those maintainers should be using LLMs to craft their breakup letters with the Issue/PR submitters!

Regarding code maintenance:

I’m actually of the mind it will be easier IF you follow a few rules.

Code maintenance is already a hassle. The solution is to maintain the intent of the original requirements in the code or documentation. With LLMs, that means carrying the prompts through and ensuring tests are generated which prove that the generated code matches that intent.
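
For instance, here's a rough sketch of what carrying intent through could look like: the prompt stays with the code as the record of intent, and a test pins the requirement down. The function and its behavior are made up for illustration.

    # Prompt used to generate this function (kept as the record of intent):
    #   "Parse a price string like '$1,234.56' into a float. Reject negatives."

    def parse_price(text: str) -> float:
        value = float(text.replace("$", "").replace(",", ""))
        if value < 0:
            raise ValueError("negative prices are not allowed")
        return value

    def test_parse_price_matches_intent():
        # The test encodes the prompt's requirements, so a future regeneration
        # of parse_price can be checked against the original intent.
        assert parse_price("$1,234.56") == 1234.56
        try:
            parse_price("-$5.00")
        except ValueError:
            pass
        else:
            raise AssertionError("negative prices must be rejected")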

Yes, I get that a million monkeys on typewriters won’t write maintainable code. But the tool they are using makes it remarkably easy to do, if only they learn to use it.


I’m not sure why the downvotes. I think the poster is basically saying the same thing as this YouTuber. I read “million monkeys” as referring to LLMs.

https://youtu.be/DNN8cHqRIB8?si=s2VBjZXrP21yziXa


I am friends with a solo maintainer of a major open source project.

He repeatedly complains that at the beginning of a semester, he sees a huge spike of false/unprovable security weakness reports / GitHub issues in the project. He thinks that there is a Chinese university which encourages its students to find and report software vulns as part of their coursework. They don’t seem to verify that what they describe is an actual security vuln or that the issue exists in his GitHub repo. He is very diligent and patient and tries to verify whether the issue is reproducible, but this costs him valuable time and very scarce attention.

He also struggles because the upstream branch has diverged from what the major Linux distributions have forked/pulled. Sometimes the security vulns are in the Linux distro package's default configurations of his app, not the upstream default configurations.

And also, I’m part of the Kryptos K4 subreddit. In the past ~6 months, the majority of posts saying “I SOLVED IT!!!1!” are LLM copypasta (using an LLM to try to solve it soup-to-nuts, not to do research, ideate, etc). It got so bad that the subreddit will now ban users on their first LLM slop post.

I worry that the fears teachers had of students using AI to submit homework have bled over into all aspects of work.


As a human being I really enjoy knowing things and being challenged to grow.

While crypto-style AI hype men can claim Claude is the best thing since sliced bread, the output of such systems is brittle and confidently wrong.

We may have to ride out the storm and continue investing in self-learning. Big tech cannot truly spend $1.5 trillion on AI investment in 2025 without a world-changing return on revenue; one billion in revenue last year from OpenAI is nothing.


Kryptos K4 seems to me like a potential candidate for AI systems to solve if they're capable of actual innovation. So far I find LLMs to be useful tools if carefully guided, but more like an IDE's refactoring feature on steroids than an actual thinking system.

LLMs know (as in, have training data on) everything about Kryptos: the first three messages, how they were solved including failed attempts, years of Usenet / forum messages and papers about K4, the official clues. They know about the World Clock in Berlin, including things published in German, they can certainly write Python scripts that would replicate any viable pen-and-paper technique in milliseconds, and so on.

Yet as far as I know (though I don't actively follow K4 work), LLMs haven't produced any ideas or code useful to solving K4, let alone a solution.
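
To make the "pen-and-paper technique in milliseconds" point concrete, here's a rough sketch of the keyed-alphabet Vigenère method that cracked K1/K2, run against the published first line of K1. This is just an illustration of how trivially scriptable these techniques are, not a K4 claim.

    def keyed_alphabet(keyword: str) -> str:
        # Alphabet starting with the keyword's unique letters (the KRYPTOS tableau).
        seen = dict.fromkeys(keyword.upper() + "ABCDEFGHIJKLMNOPQRSTUVWXYZ")
        return "".join(seen)

    def vigenere_decrypt(ciphertext: str, key: str, alphabet: str) -> str:
        n = len(alphabet)
        out = []
        for i, c in enumerate(ciphertext):
            ci = alphabet.index(c)
            ki = alphabet.index(key[i % len(key)])
            out.append(alphabet[(ci - ki) % n])
        return "".join(out)

    alpha = keyed_alphabet("KRYPTOS")
    # First line of K1's ciphertext, with its published key "PALIMPSEST":
    print(vigenere_decrypt("EMUFPHZLRFAXYUSDJKZLDKRNSHGNFIVJ", "PALIMPSEST", alpha))
    # -> BETWEENSUBTLESHADINGANDTHEABSENC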


In China, medical students are required to publish original papers. Instead, they just pay someone to write them for them and pollute the literature.

So much for the curation argument used to justify the prices of professional journals.

The typical graduation-requirement paper doesn't get published in a professional journal, so I think professional journals do provide significant curation.

Medical? What's the point? I'm happy with 98% of doctors being able to handle known conditions and only the few percent who are really interested doing research.

It makes the university look better if they do a lot of 'research' even if it's fake. There's not a real reason a doctor needs to do research for an MD.

>I worry that the fears teachers had of students using AI to submit homework have bled over into all aspects of work.

As one does in academia, so too in the market, because now we have a financial incentive. It ain't going to stop.


Tanks for all! /s

The founding fathers denied the right to bear arms to Catholics (and I’d wager lots of other religions), Native Americans, and slaves (unless their owners explicitly allowed them), and we inherited English Common Law, which limited carrying guns in populated areas.

Until Heller in ~2008, the right to bear arms (as a national right) was widely agreed to mean a collective right (e.g. the militias), not an individual right.

We are in a weird place at this moment where the tide has turned and lots of jurisprudence is being switched. Also, with ICE / DHS acting as unprofessionally as they are, I wouldn’t be surprised to see lots of Dems advocate for more individual gun rights.


> Tanks for all!

"Tanks" as a vehicle aren't regulated whatsoever - their main cannon is a destructive device which carries its own set of regulations, but you can absolutely own a tank (sans main gun) with zero paperwork.

Privateers sank over 600 British vessels during the Revolution - do you think they needed permits for their cannonry? Or that the Founders somehow didn't know this was happening?

> Until Heller in ~2008, the right to bear arms (as a national right) was widely agreed to mean a collective right (e.g. the militias), not an individual right.

Tell me what United States v. Miller was about, then?

Why do the Federalist papers disagree with everything you are saying, repeatedly?

> we inherited English Common Law which limited carrying guns in populated areas.

Federalist #46:

"Besides the advantage of being armed, which the Americans possess over the people of almost every other nation, the existence of subordinate governments, to which the people are attached, and by which the militia officers are appointed, forms a barrier against the enterprises of ambition, more insurmountable than any which a simple government of any form can admit of. Notwithstanding the military establishments in the several kingdoms of Europe, which are carried as far as the public resources will bear, the governments are afraid to trust the people with arms. And it is not certain, that with this aid alone they would not be able to shake off their yokes. But were the people to possess the additional advantages of local governments chosen by themselves, who could collect the national will and direct the national force, and of officers appointed out of the militia, by these governments, and attached both to them and to the militia, it may be affirmed with the greatest assurance, that the throne of every tyranny in Europe would be speedily overturned in spite of the legions which surround it."

This "collective right" idea is completely bogus and flies in the face of countless historical writings, accounts, etc. The jurisprudence on this issue is long-settled, and who are you to disagree with a majority of Justices of the Supreme Court of the United States?


Department insurance policies are the only thing that seems to scare departments into improving policies and behaviors.

Insurers who threaten to drop departments have immense leverage that city managers, city elected leaders, and voters don’t.

There aren’t a lot of departments that go bankrupt, but the few that do make a crying show of it, and they serve as a great example to show departments who flout reform.


Trust in the real world is not immutable. It is constantly re-evaluated. So the Web of Trust concept should do this as well.

Also, there needs to be some significant consequence to people who are bad actors and, transitively, to people who trust bad actors.

The hardest part isn’t figuring out how to cut off the low-quality nodes. It’s how to incentivize people to join a network where the consequences are so high that you really won’t want to violate trust. It can’t simply be a free account that only requires a verifiable email address. It will have to require a significant investment in verifying real-world identity, preventing multiple accounts, reducing account hijackings, etc. Those are all expensive and high-friction.
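
As a rough sketch of the "constantly re-evaluated" part: trust can be modeled as an exponentially weighted score that moves toward recent behavior instead of staying fixed at whatever it was when the relationship formed. The update rule and numbers here are assumptions for illustration.

    def update_trust(current: float, interaction_was_good: bool, alpha: float = 0.2) -> float:
        # Move trust toward 1.0 on good interactions, toward 0.0 on bad ones.
        target = 1.0 if interaction_was_good else 0.0
        return (1 - alpha) * current + alpha * target

    trust = 0.9
    trust = update_trust(trust, interaction_was_good=False)  # one breach: 0.72
    trust = update_trust(trust, interaction_was_good=False)  # breaches compound
    print(round(trust, 3))  # 0.576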


I really don't want to expand the surveillance state...

Are GPG key-signing parties part of the “surveillance state”?

It is the exact thing this system needs.


I would go even further. I only want to see content created by people who are in a chain of trust with me.

AI slop is so cheap that it has created a blight on content platforms. People will seek out authentic content in many spaces. People will even pay to avoid the mass “deception for profit” industry (e.g. industries where companies bot ratings/reviews for profit and where social media accounts are created purely for rage bait / engagement farming).

But reputation in a WoT network has to be paramount. The invite system needs a “vouch” so there are consequences for you and your upstream voucher if there is a breach of trust (e.g. lying, paid promotions, spamming). Consequences need to be far more severe than the marginal profit to be made from these breaches.
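
A minimal sketch of what that vouch chain could look like: a breach costs the offender and, with diminishing weight, everyone upstream in their invite chain. The structure and penalty factors are made-up assumptions, not a spec.

    reputation = {"alice": 1.0, "bob": 1.0, "carol": 1.0}
    vouched_by = {"carol": "bob", "bob": "alice"}  # carol was vouched for by bob, etc.

    def penalize(user, penalty=0.5, upstream_factor=0.5):
        # Walk up the invite chain, applying a fading penalty at each hop.
        while user is not None:
            reputation[user] = max(0.0, reputation[user] - penalty)
            penalty *= upstream_factor
            user = vouched_by.get(user)

    penalize("carol")  # carol spams: carol -0.5, bob -0.25, alice -0.125
    print(reputation)  # {'alice': 0.875, 'bob': 0.75, 'carol': 0.5}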


I think it absolutely is coming to an end in lots of ways.

Movie/show reviews, product reviews, app/browser extension reviews, programming libraries, etc. all get gamed. An entire industry of boosting reviews has sprung up, with PR companies brigading positive reviews for their clients.

The better AI gets at slop and controlling bots to create slop which is indistinguishable from human content, the less people will trust content on those platforms.

Your trust relationship with your artist almost certainly was based on something other than just contact info. Usually you review a portfolio and a professional profile, and you start with a small project to limit your downside risk. This tentative relationship, with phased stages where trust is increased, is how human trust relationships have always worked.


> Movie/show reviews, product reviews, app/browser extension reviews, programming libraries, etc. all get gamed. An entire industry of boosting reviews has sprung up, with PR companies brigading positive reviews for their clients.

But that has been true for a long time, unrelated to AI. When Amazon first became available here in Spain (I don't remember exactly what year, but before LLMs for sure), the number of fraudulent reviews filling the platform was already noticeable.

That industry you're talking about might have gotten new wings with LLMs, but it wasn't spawned by LLMs; it existed a long time before that.

> the less people will trust content on those platforms.

Maybe I'm jaded from using the internet from a young age, but both my peers and I basically have a built-in mistrust of random stuff we see on the internet, at least compared to our parents and our younger peers.

"Don't believe everything you see on the internet" been a mantra almost for as long as the internet has existed, maybe people forgot and needed an reminder, but it was never not true.


LLMs reduce the marginal cost per unit of content.

When snail mail had a cost floor of $0.25 for the price of postage, email was basically free. You might get 2-3 daily pieces of junk mail in your house’s mailbox, but you would get hundreds or thousands in your email inbox. Slop comes at scale. LLMs didn’t invent spam, but they are making it easier to create more variants of it, and possibly ones that convert better than procedurally generated pieces.
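
Some hypothetical back-of-envelope numbers make the marginal-cost point concrete (the unit costs below are illustrative assumptions, not measured figures):

    # How many messages a $100 spam budget buys per channel.
    budget = 100.00
    unit_costs = {
        "snail mail (postage floor)": 0.25,
        "plain email (infra, approx)": 0.0001,
        "LLM-personalized email (tokens, approx)": 0.001,
    }
    for channel, cost in unit_costs.items():
        print(f"{channel}: {budget / cost:,.0f} messages")
    # 400 letters vs. a million emails vs. 100,000 individually varied pitches.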

There’s a difference between your cognitive brain and your lizard brain. You can tell yourself that mantra, but still occasionally fall prey to spam content. The people who make spam have a financial incentive to abuse the heuristics/signals you use to determine the authenticity of a piece of content, in the same way the makers of cheap knockoffs of Rolex watches, Cartier jewelry, or Chanel handbags have an incentive to make the knockoffs appear as authentic as possible.


>When snail mail had a cost floor of $0.25 for the price of postage

Hence I suspect that quite a few of these interfaces that are now being spammed with AI crap will end up implementing something that represents a fee, a paywall, or a trustwall. That should keep armies of AI slop responses from being worthwhile.

How we do that without killing some communities is yet to be seen.
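
The economics of such a trustwall are simple to sketch: spam stops being worthwhile once the per-post cost exceeds the expected revenue per post. All numbers here are assumptions for illustration.

    def spam_is_profitable(fee, conversion_rate, revenue_per_conversion):
        # Expected revenue per post vs. the cost of making the post.
        return conversion_rate * revenue_per_conversion > fee

    # Free posting: even a one-in-100,000 conversion is worth spamming.
    print(spam_is_profitable(fee=0.00, conversion_rate=1e-5, revenue_per_conversion=50))  # True
    # A one-cent fee flips the calculus at the same conversion rate.
    print(spam_is_profitable(fee=0.01, conversion_rate=1e-5, revenue_per_conversion=50))  # False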


Trust is not binary — it is a spectrum.

Anyone making a claim that trust will be 0% based on a single thing is obviously oversimplifying the situation. Trust is built on behavior, reputation, time, repeatability, etc.

Trust is subjective and relative. If Alice doesn’t trust Eve, that doesn’t automatically mean that Bob doesn’t trust Eve. That usually requires both Alice and Bob to have had similar experiences, or Bob must have a trust relationship with Alice.
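
A minimal sketch of that subjectivity, assuming a multiplicative propagation rule (one common convention, not the only one): Bob's view of Eve depends on his trust in Alice, while a stranger has no basis to judge at all.

    direct = {("alice", "eve"): 0.1, ("bob", "alice"): 0.8}

    def trust(a, b):
        if (a, b) in direct:
            return direct[(a, b)]
        # No direct experience: route through someone `a` trusts directly.
        for (x, y), t in direct.items():
            if x == a and (y, b) in direct:
                return t * direct[(y, b)]
        return 0.5  # no information either way

    print(trust("alice", "eve"))  # 0.1  (direct experience)
    print(trust("bob", "eve"))    # 0.08 (only via his trust in alice)
    print(trust("carol", "eve"))  # 0.5  (carol has no basis to judge)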


Trust also changes over time. One CEO change and a company can change overnight, causing all trust to evaporate. Normally CEOs are aware of this and don't change things, so trust transfers, but one mistake and you lose it. It takes a lot to build trust back, but after a few years of proving worthy of it, it starts to come back. If a competitor violates trust in the meantime, customers are more likely to take a risk on you, and if you prove trustworthy, those customers are likely to stay.

There are other factors than trust as well: the US government really wants Intel fabs to take off, and it may be applying pressure that we are not aware of. It could well be that Apple is willing to risk Intel because the US government will buy a lot of Macs/iPhones, but only if the CPU is made in the US. (This would be a smart thing for the US to do for geopolitical reasons.)


The end of this commits the snowball fallacy.

Just because people appear to be doing more isolated things doesn’t mean it’s a ratchet that only moves in one direction. People adjust. When enough people feel too lonely, they will adapt, and many of them will come up with a solution, likely swinging the pendulum in the other direction.


It is a ratchet, though. South Korea's birth rate has been at an existentially low level for at least a decade and shows no sign of improving. Things won't improve while technology and tech companies are warping what it means to be human.

Now there's chatter about AI companions. If they take off and substitute for real relationships, it's game over. Swathes of the population will bed-rot because they have no incentive to go outside for anything other than work.


I believe it's more likely to lead to political extremism and a positive (rather than negative) feedback loop.
