Has anyone come across a good semi-technical description of "how your node knows that its message got sent and reached its destination and can stop sending"? I'm interested in how this logic is executed when you can't be sure (I assume) that knowledge of the result will get back to you in a certain amount of time.
In short, all messages are sent within windows, which include waiting periods within which the broadcasting node listens for any other node to rebroadcast the same message. Upon seeing the message be rebroadcast, it stops attempting to send.
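The retransmit-until-overheard idea above can be sketched in a few lines. This is a toy model, not any particular protocol's implementation; the class name, retry count, and callback shape are all illustrative assumptions.

```python
class Node:
    """Toy model of flood routing with implicit acks: a sender
    retries until it hears any neighbour rebroadcast its message."""

    MAX_RETRIES = 3  # illustrative; real protocols tune this

    def __init__(self, name):
        self.name = name
        self.pending = {}  # msg_id -> retransmission windows remaining

    def send(self, msg_id):
        # Queue the message and allow a few retransmission windows.
        self.pending[msg_id] = self.MAX_RETRIES

    def on_window_elapsed(self, msg_id, heard_rebroadcast):
        """Called at the end of each listen window."""
        if msg_id not in self.pending:
            return "done"
        if heard_rebroadcast:
            # Another node repeated the message: implicit ack.
            del self.pending[msg_id]
            return "acked"
        self.pending[msg_id] -= 1
        if self.pending[msg_id] <= 0:
            del self.pending[msg_id]
            return "gave_up"
        return "retry"

node = Node("A")
node.send("m1")
print(node.on_window_elapsed("m1", heard_rebroadcast=False))  # retry
print(node.on_window_elapsed("m1", heard_rebroadcast=True))   # acked
```

Note there is never positive confirmation of end-to-end delivery here, only the weaker signal "a neighbour heard me", plus a retry budget so the node eventually stops even if nothing comes back.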
I am severely tempted to hop on a flight to go and see it, but wondering if it's such a "once in a lifetime" thing to go see? That, and if it'll peter out by the time I get there, and $500+ to just fly on a whim and stay overnight.
It seems to me an analogy that as a product is increasingly complex, the ultimate consumer/demander of it becomes more and more disconnected from maintenance, operations, etc. considerations and whether that system is well designed and serviceable.
Cars of a past generation were able to be owner-maintained (or understood), and therefore the owner had some interest in knowing that it was easy to maintain and would buy (at least partly) on that premise. Something that was a nightmare to maintain would not be so easily bought because the owners would soon realize how hard they were to fix.
Now, with a car that is so complicated, the owner is far removed from being the fixer of it until, years later, a surprise repair bill arrives. Even the maintainers are not directly knowledgeable about the design and how to repair it. And information about maintainability ranks low on the list of buying considerations. But by then you've already given the company the money and the incentive to keep on building this way, and that information rarely (or only with extreme lag) feeds back.
It seems to me enterprise software systems have this problem as well.
I wonder how the incident was diagnosed? Does the FDR record low level errors that might've contributed to this? I thought that it only recorded certain input parameters and high-level flight metrics but I'm no expert.
If a radiation event caused some bit-flip, how would you realize that's what triggered an error? Or maybe the FDR does record when certain things go wrong? I'm thinking like, voting errors of the main flight computers?
"Had the same problem with low power CMOS 3 transistor memory cells used in implantable defibrillators in the 1990s. Needed software detection and correction upgrade for implanted devices, and radiation hardening for new devices. Issue was confirmed to be caused by solar radiation by flying devices between Sydney and Buenos Aires over the south pole multiple times, accumulating a statistically significant different error rate to control sample in Sydney."
I was surprised too at the 2nd sentence: "The project will have a heating power of 2MW and a thermal energy storage (TES) capacity of 250MW..."
and how a news outlet about energy could get such a fundamental unit wrong.
But given that later in the article it does revert to correct units (and the numbers are plausibly proportional), I assume it's just a typo. Strange that it hasn't been corrected even now.
"...It follows Polar Night Energy completing and putting a 1MW/100MWh Sand Battery TES project into commercial operations this summer..."
Assuming you aren't looking for an exact millimeter-for-millimeter version of the spiral condenser, I bet you could find someone to make it in the US quite easily.
It's still one of the major glassblowing countries. In fact, if you remember when folks were worried about those two quartz mines in Spruce Pine, NC that are the only place with pure enough quartz for chips? That's also the home of Spruce Pine Batch, one of the big glass suppliers in America.
Hey, I heard about how utility pole inspecting helicopters are able to tell the good/rotten state of wooden telephone poles by the reverb pattern of sound waves coming off the poles from the rotors -- it seems to me the whole field of non-invasive sensing (and using existing/ambient emission sources) is getting pretty impressive.
> Hey, I heard about how utility pole inspecting helicopters are able to tell the good/rotten state of wooden telephone poles by the reverb pattern of sound waves coming off the poles from the rotors
There is a whole field dedicated to this, called non-destructive testing. Modal response (i.e., monitor how a structure vibrates in response to an excitation) is a basic technique that features in multiple areas such as structural health monitoring and service life estimations.
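A crude stand-in for modal analysis is just finding the dominant frequency of a recorded tap or excitation response. The sketch below uses a synthetic signal; the specific frequencies and damping constants are invented for illustration, not measured pole data.

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the strongest frequency component of a vibration
    recording: a minimal proxy for a modal-response measurement."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Synthetic "tap responses": a sound pole rings at a higher frequency
# with light damping; a rotten one rings lower and dies out faster.
rate = 8000
t = np.arange(0, 1.0, 1.0 / rate)
sound_pole = np.exp(-3 * t) * np.sin(2 * np.pi * 440 * t)
rotten_pole = np.exp(-12 * t) * np.sin(2 * np.pi * 180 * t)

print(dominant_frequency(sound_pole, rate))   # ~440 Hz
print(dominant_frequency(rotten_pole, rate))  # ~180 Hz
```

Real structural health monitoring looks at far more than the peak frequency (mode shapes, damping ratios, changes over time), but the peak shift alone already separates these two toy signals.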
Some mechanics also do this by placing the tip of a screwdriver against a point in an engine and placing their ear against the screwdriver's handle. If it doesn't sound right, the engine has problems.
Even pottery. You should hear the sound of a pot after you tap it. If it's muffled then odds are it has internal cracks.
In telecommunications construction we are taught to make ample use of the "hammer test" when working on and around poles. The difference in sound between a good pole, a marginal pole and a completely rotten pole is quite significant.
In outdoor rock climbing smacking rocks is an integral part of ensuring the rock you're trusting your life with is in fact worth trusting your life with.
Only if you go outside well-secured sport climbs, where you don't have to think about that (though it's still a good idea to check the bolts for any visible damage from rot and rust). And even then, some rocks are hollow yet can sustain the next 5000 years of literally any climbing on them, while some sound more solid and will come off if somebody over 80kg hangs on them. So it's more about calming one's mind than an objectively good quality test.
Most folks in Europe climb only sport routes, or else do some variant of proper alpinism on properly wild, unsecured terrain.
Too expensive where I live. Rocks, hills and trees: the natural enemies of buried fibre and wireless networks. One of my competitors took 6 months to bury a cable in granite that would've been a 5 day aerial job.
I'm so glad you say that. Resi aerial is perfect in most locations. No dig, no service boxes in front yards, under someone's unpermitted driveway pour, ample power easily, a guy in a bucket truck is all you need.
Trenchless works well when it can, but even reasonable infrastructure underground is twice as expensive.
I love seeing a neighborhood lit up in fiber in 2-5 days and subscribers online at 1-10Gb in soooo many places. Keeps crews busy either way :D
Downside is: a drunk guy in a truck is all you need to tear it down, not to mention natural disaster influence. And it's unsightly AF.
Yes, it's fast and cheap. That's how we got the situation that a backwater village in the midst of the "anus mundi" of Romania has XGPON for a few dozen euros a month, while you're lucky to get anything above 50M VDSL in Germany outside of large urban areas and 200M VDSL in urban areas.
But holy hell it's an eyesore to be in said village in Romania, look out the window and look at a bunch of fiber strung not even from a proper pole but from a tree. Takes the German expression "Kabelbaum" to a whole new level.
Even if a pole is taken out by a drunk driver that does not mean the cables are going to be severed. I've seen plenty of times when poles had to be replaced, but the communications cables remained undamaged in place due to the strength and tension of the supporting strand.
The bigger issue over the last 5 years in the area where my company operates is the number of dump trucks that leave the bed up. Given the weight of a dump truck, it is easy for them to pull down multiple poles when they catch the cables, although perhaps those are drunk drivers too...
And outdoor DSLAMs are invulnerable, to cars, vandalism, dog-piss, whatever? Ever walked by one in the middle of the night, when its cooling fans hum? Wanna live near that?
Yes. But you wrote about ugly and vulnerable infrastructure in the middle of nowhere ("Arsch der Welt"/"JWD") first, and lamented the limited availability and performance of pink Telecomicstan VDSL in Teutonistan second. I wrote about the latter because I've heard them; they are not passive.
Also where I live (a karst region) other expensive things we deal with are frost lines (frost heave is a real issue; water expands as it freezes, things in the ground don't stay in the ground if ice is expanding into their space) and limestone rock underfoot (sinkholes are a real issue; dig wrong or too deep or not carefully enough and cave in the ground right under you, or worse, someone's house right next to you).
Google Fiber wrecked entire city streets relearning these things the dumbest way possible (then left the street repair bills to us, the taxpayers, because of course they did).
Come on. This is Hacker News: criticism is fine, but make it constructive. No need to make such a dismissive comment. The other replies were also critical of it, but they all gave reasons. Please be better next time, and I hope you have an otherwise great day, friend!
When I lived in the countryside on a bit of land and needed to get fiber from the road to my house on my own dime, burying the line was 5-10x as expensive as suspending it.
And look where that got Germany;
my hometown and neighbouring towns are mostly on ADSL or rarely VDSL if you’re lucky, because the big players don’t want anything to do with the cost and legal side.
Local municipalities establish de-facto monopolies and drive prices up, because they offer slightly faster and more stable lines.
There is a joint effort by local utility companies in Mecklenburg and they’re trying to make things better, but anecdotally are also challenging to deal with.
My current residence here in the UK is not really rural, and for years Giganet/CityFibre/toob promised gigabit soon(TM), but the date got delayed and delayed and delayed.
At least here in Denmark, they seem to have opted for installing bigger "pipes", instead of just laying down some fiber cables. Then in the future they can just push new cables through the pipes. An idea I bet they wish they had gotten the first time around.
That is not the reason that got Germany to have poor telecom infrastructure. We also have poor 4G/5G coverage without the need of any FTTH setups.
There is a common case of excessive bureaucracy and extremely conservative population (thank you, low birth rates) which is hindering any significant development in the country.
Yeah, the reason isn't so much some cables in the ground as the general byzantine bureaucratic obscurity of a state that you Germans created (or allowed to be created) and maintain for yourselves. It's far from the only issue stemming from it, and all are just symptoms of underlying dysfunctionalities. Also, the population seems to mostly sit around waiting for politicians to fix all their problems.
The GDR was deploying fiber, but the West used capitalism as the underlying mechanism, so the fiber was left unused and even replaced by copper after reunification: why use the latest technology just yet when you can get people to pay both for the downgrade and for the upgrade some decades later!
There was fiber deployment in the GDR and plans to extend it already before the OPAL project, which came after reunification. I remember our East German CS network professor talking about it with passion but fail to find information online. Which doesn’t surprise me, since history is written by the winners. I trust his personal stories more than the lack of information online.
It could be. Even much stuff from the 'winners' from before common internet access is lost ;->
OTOH, considering how well the 'megabit chip' went, I wonder what they'd have done with fiber at the time. For the military, agencies, ministries and some universities, maybe, but for the masses? How common was the 'stinknormales Telephon' (plain ordinary telephone) in households back then?
I can’t tell if this ever became a reality; I know of more modern approaches attempting to use thermal and multi spectral imaging to achieve the same goal.
I live near a helicopter factory and when the spinning towers are in use, you hear all sorts of auditory patterns as you move around the town. When they are test flying - similar and the Police have one and there is an air ambulance too. My Dad's other staff car in the '80s was a Gazelle and in the '70s he whizzed around in a Sioux. I've seen and heard a lot of helos!
I have absolutely no doubt that with some funky signal processing you can do all sorts of things.
It's much less disturbing than living under a flight path to an airport. I actually like the sound of a helo -- it's variable and interesting, as is a piston-powered fixed-wing aircraft.
Mind you, I also lived near RAF, USAAF and Luftwaffe bases back in the day, and several flights of Phantoms, Starfighters, Jaguars, Tornadoes and the rest can make quite a din. Phantoms were huge-engined beasts with minimal effort made at noise reduction. A "finger four" lighting up their afterburners to gain altitude really fast is ear-splitting.
HV transmission line inspection routinely has the linesmen crawl out of helicopters onto the lines and back. Granted, as far as I know it's the highest-skill and most difficult helicopter job.
I've seen that in person while in Canada and it is most impressive. The moment they discharge the differential between the helicopter and the line is just awesome. The firebreak clearing operations are also something to behold. From a very safe distance.
That makes sense. It's probably less "doing crazy convolution calculations on how sampled ambient noise changes as the helicopter gets close to a pole", and more "rotten wood vibrates slower"
A typical CT scan delivers enough radiation to give a healthy person a 1/500 chance of getting a cancer in their lifetime that they otherwise would not have gotten. The risk is higher for children.
We have people working around low-flying aircraft all the time. I’m guessing the associated job risks are better.
When you take those jobs, it’s because you want to make money, not because your life is at risk, there’s information asymmetry between you and the medical provider who is indirectly rewarded for billing for scans, and the overarching medical system prioritized CT scans over MRIs while our engineering culture failed to establish something safer and cheaper.
Would you play Russian Roulette with a revolver with 500 chambers and 1 bullet? What if by doing so a hospital would receive thousands of dollars, and would go on to be paid many more thousands of dollars if you got unlucky?
The cost-benefit trade-off is there, and the powers that be are prioritizing cancer.
Fascinating -- I appreciate you raising awareness. This information was a big update for me, so I looked for a source and found roughly the same numbers (though mine were closer to 1/1000, possibly because newer CT exams seem to be slightly safer). From [1]:
> ...93 million CT examinations performed ... projected to result in approximately 103 000 future cancers ... cancer risk was higher in children ... CT-associated cancers could eventually account for 5% of all new cancer diagnoses annually.
Although keep in mind that these numbers do need context: cancer != death. That ranges from cold comfort (in the case of painful chemo treatment and years of fear) to a critical factor (based on how the USA diagnoses it, approximately 6% of men will have prostate cancer that does not require treatment).
Based only on these numbers above and my prior beliefs, I would say that either
A) CT scans are a necessary evil that haven't been adequately replaced
or
B) These numbers are less problematic than one might expect, due to some quirk of the data
I generally trust the USA's medical establishment on new treatment, though I've heard that they're slow to clamp down on outdated treatments.
I appreciate you looking into the numbers to verify. The 1/1000 odds seem better, though still important.
Also, framed another way, if 5% of cancer cases were caused by CT scans, that would mean 1 in 20 people in the cancer ward were put there by a CT scan. Or alternatively, phasing out CT scans would prevent 1 in 20 cancer cases, and prevention is worth more than a cure.
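The per-scan odds implied by the study figures quoted earlier (93 million examinations, ~103,000 projected future cancers) can be checked in one line:

```python
# Back-of-the-envelope check of the quoted study figures:
# 93 million CT examinations -> ~103,000 projected future cancers.
exams = 93_000_000
projected_cancers = 103_000

per_scan_risk = projected_cancers / exams
print(f"~1 in {round(1 / per_scan_risk)}")  # roughly 1 in 900
```

That lands near the 1/1000 figure mentioned above, and between it and the 1/500 claim that started the thread; the true per-scan risk also varies a lot with scan type, dose, and patient age, which a single average necessarily hides.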
Do you have any other wavelengths of radiation that pass through flesh but not bone and metal that we can use instead? If so, speak up, please; otherwise we need to keep using X-rays because, physics.
Yeah, but that's different. It's great for soft tissue (which has water that can be excited by the magnetic field) but less great for things like bone. Hence why CTs are still used. Also, the magnetic field makes things like intraoperative imaging very difficult.
When bone is what needs attention, you can use conventional x-rays in all but specialist cases. A single x-ray image is typically far less radiation than a whole CT scan.
And yes, you can still see bone in an MRI. A related question is, how well can you see soft tissue in a CT scan?
CT scans are routinely used to diagnose soft tissue problems, where they are the wrong tool for the job: an MRI would be more ideal. CT scans in these situations expose the patient to avoidable cancer risks while compromising the level of insight provided to the medical provider.
Intraoperative imaging is another specialist use case. The need for CT scans in specialist situations speaks to the failure to develop alternatives with lower cancer risks. Also, the need to use a CT scan in certain situations does not mean that CT scans should be used in other situations.
By the way, a pet peeve of mine right now is that reporters covering court cases (and we have so many of public interest lately) never seem to simply paste the link to the online PDF decision/ruling for us all to read, right in the story. (and another user here kindly did that for us below: https://storage.courtlistener.com/recap/gov.uscourts.dcd.223... )
It seems such a simple step (they must have had the ruling PDF open while writing the story), yet why does it always seem such a hassle for them to link the original content? I would rather be able to read the ruling, probably dozens of pages with full details, than hear it secondhand from a reporter at this point. It feels like they want to be the gatekeepers of information, and poor ones at that.
I think it should be adopted as standard journalistic practice in fact -- reporting on court rulings must come with the PDF.
Aside from that, it will be interesting to see on what grounds the judge decided that this particular data sharing remedy was the solution. Can anyone now simply claim they're a competitor and get access to Google's tons of data?
I am not too familiar with antitrust precedent, but to what extent does the judge rule on how specific the data sharing needs to be (what types of data, for what time span, how anonymized, etc.), or appoint a special master? Why is that up to the judge versus the FTC or whoever to propose?
> By the way, a pet peeve of mine right now is that reporters covering court cases never seem to simply paste the link to the online PDF decision/ruling for us all to read right in the story.
I presume that this falls under the same consideration as direct links to science papers in articles that are covering those releases. Far as I can tell, the central tactic for lowering bounce rate and increasing 'engagement' is to link out sparsely, and, ideally, not at all.
I write articles on new research papers and always provide a direct link to the PDF, but nearly all major sites fail to do this, even when the paper turns out to be on Arxiv or otherwise directly available (instead of having been an exclusive preview offered to the publication by the researchers, as often happens at more prominent publications such as Ars and The Register).
In regard to the few publishers that do provide legal PDFs in articles, the solution I see most often is that the publication hosts the PDF itself, keeping the reader in their ecosystem. However, since external PDFs can get revised and taken down, this could also be a countermeasure against that.
They didn't cite papers directly even before the web. It's not a bounce or engagement issue.
Journalists don't make it easy for you to access primary sources because of a mentality and culture issue. They see themselves as gatekeepers of information and convince themselves that readers can't handle the raw material. From their perspective, making it easy to read primary sources is pure downside:
• Most readers don't care/have time.
• Of the tiny number who do, the chances of them finding a mistake in your reporting or in the primary source are high.
• It makes it easier to mis-represent the source to bolster the story.
Eliminating links to sources is pure win: people care a lot about mistakes but not about finding them, so raising the bar for the few who do is ideal.
That’s not how it works at large news orgs. Journalists will enter their articles in a CMS. From there it will get put into a workflow and seen by one or several editors who will edit things for clarity, grammar, style, etc. Links will get added at some point by an editor or automated system. There is no cabal of journalists scheming to keep links out of articles because “culture”.
If there were a culture of always including the original source, or journalists massively advocating to include the original source, then surely the CMS would cater to it. I think it's safe to draw the conclusion that most journalists don't care about it.
There’s a lot of rightly deserved criticism of the media but the OP describing journalists as conspiring to keep links out in the fear of being fact checked by readers is simply false and indicative of not having any experience at a large news organization.
It's not historically done. Printed newspapers obviously didn't have links and neither did televised news. Even when the news media started publishing online, it's not like the courts were quick to post the decisions online.
And there's also the idea that you should be able to at least somewhat trust the people reporting the news so they don't have to provide all of their references. --You can certainly argue that not all reporters can or should be trusted anymore, but convincing all journalists to change how they work because of bad ones is always going to be hard.
There is also the added pressure that some organizations quietly pile on editors to keep people from clicking out to third parties at all, where their attention may wander away. Unless of course that third party is an ad destination.
Reputable news organizations are more robust against such pressures, but plenty of people get their news from (in some cases self-described) entertainment sites masquerading as news sites.
Is it because journalists think of their special talent as talking to people to get information (which is a scarce and privileged resource), versus reading and summarizing things that we all have access to?
So they rarely are forced to do anything but state the name of whom they interviewed, and that's it. And that puts them in the habit of not acknowledging what they read as a source?
It seems more malicious or intentional than mere gatekeeping. More often than not, when I dig deeper into a news article referencing a scientific paper or study, the details they pull out are very much out of context and don't tell the same story as the research.
I have to assume the journalist writing such an article knows that they are misrepresenting the research to make a broader point they want to make.
Articles about patent infringement are similarly annoying when the patent numbers aren't cited. This is basic 21st century journalism 101. We aren't limited to what fits on a broadside anymore.
We need an AI driven extension that will insert the links. This would be a nice addition to Kagi as they could be trusted to not play SEO shenanigans.
If news on the web was journalism instead of attention seeking for ad revenue you’d be right.
Agree on the extension idea, except I’m not sure I want to see the original sensationalized content anyway. Might as well have the bot rewrite the piece in a dry style.
> If news on the web was journalism instead of attention seeking for ad revenue you’d be right.
That’s painting with an overly broad brush. Nevertheless, the news was relying on ads long before most people knew the word “Internet”, but there were far fewer channels to place ads back then, so in some respects news and media organizations had a captive audience in advertisers.
Mass adoption of television was effectively made possible because of advertising money.
I don't read science/tech articles from major news outlets for this reason. They NEVER link to the papers and I always have to spend a few minutes searching for it.
This doesn't happen nearly as often on smaller sci/tech news outlets. When it does a quick email usually gets the link put in the article within a few hours.
It's depressing how much of the web didn't work the way it was supposed to. Attention is centralized on news websites because news can be posted on social media feeds every day. Those news articles never link to other websites due to arbitrary SEO considerations. Google's pagerank which was once based on backlinks can't function if the only links come from social media feeds in 3 websites and none of them come from actual websites. On top of it all, nobody even knows for sure if those SEO considerations matter or not because it's all on Google's whim and can change without notice.
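The backlink-driven ranking the comment alludes to can be shown with a minimal PageRank power iteration over a toy link graph. The page names, damping factor, and iteration count are illustrative, not anything Google actually runs.

```python
# Minimal PageRank power iteration: pages that attract more inbound
# links accumulate more rank, which is why a web of sparse outbound
# links starves the signal.
damping = 0.85  # classic PageRank damping factor

links = {                 # page -> pages it links out to
    "news": ["social"],
    "blog": ["news"],
    "social": ["news"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

print(max(rank, key=rank.get))  # "news": it has the most backlinks
```

In this toy graph "blog" links out but nothing links to it, so it settles at the floor rank; if almost all links live inside a handful of social feeds, everything outside those feeds looks like "blog" to a backlink-based ranker.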
The web works fine it's just PACER and stuff that is garbage because there is no competition in the trash people create for the government and public apathy (or corruption, take your pick) is high.
I think one of the lessons of Wikipedia is that the more you link out, the more they come back.
People come to your site because it is useful. They are perfectly capable of leaving by themselves; they don't need a link to do so. Having links to relevant information that attracts readers back is well worth the cost of people following links out of your site.
Interesting example, as Google used to link to Wikipedia much more prominently, then stopped doing that, which dropped Wikipedia's visitor counts a lot. A very large percentage of Wikipedia's visits are Google referrals.
Google shifted views that used to go to Wikipedia first to their in-house knowledge graph (high percentages of which are just Wikipedia content), then to the AI produced snippets.
All to say, yes...Wikipedia's generosity with outbound links is part of the popularity. But they still get hit by this "engagement" mentality from their traffic sources.
I would argue that this is less an example of why linking out may be bad for engagement and more an example of google abusing its intermediary/market position to keep users on their own pages longer
I'd argue that a user not having to click through is clearly a better result for the user, and that alone would be sufficient motivation to do it.
In terms of a single search, I don't think Google really benefits from preventing a click-through - the journey is over once the user has their information. If anything, making them click through to an ad-infested page would probably get a few fractions of a cent extra given how deeply Google is embedded in the ads ecosystem.
But giving the user the result faster means they're more likely to come back when they need the next piece of information, and give them more time to search for the next information. That benefits Google, but only because it benefits the user.
That'd be all fine if google produced that content, but since it doesn't, once they kill off the website, what happens to the quality of their snippets? Then the user has only shitty snippets that are out of date.
That's the kind of short-sighted view that's the root issue in a ton of the enshittification happening around us: the belief that short-term gains or benefits are all that matters. It's not sustainable to leech off of Wikipedia content to fuel your own (as in Google's) knowledge pop-ups, even if it benefits the user by saving a single click, because long-term it means Wikipedia will die out: users no longer associate the knowledge gained with Wikipedia but with Google, even though Google had nothing to do with it apart from "stealing it".
In my niche these links have all been going to Indian scam sites for years and years. Right now you can google "taehyung", one of the biggest K-pop idols, and see it live. Count the Indian links that have been dominating without any particular expertise, thanks to Google's changes (scammer-site kickbacks etc.).
I won't call it dead, but it is declining. Their various sources of traffic are now regurgitating Wikipedia Content (and other 3rd party sources) via uncited/unlinked AI "blurbs"...instead of presenting snippets of Wikipedia contents with links to Wikipedia to read more.
It's not the only reason their traffic is declining, but it seems like a big one.
I may be wrong, but I don’t think the people that edit Wikipedia are the same people that are content with half truths from LLMs and thus no longer visiting the site. So I kinda doubt it matters much.
Also, Stack Overflow is a commercial website, while Wikipedia is a free (as in freedom) project. Editing Wikipedia feels like you're contributing towards "an ideal", that you're giving back something to humanity, instead of just helping somebody else getting richer.
When it comes to providing direct links to PDFs of scientific papers, you can often run into paywall issues. Court decisions/rulings, on the other hand, do not belong to any publisher, so it's a different story.
Never link outside your domain has been rule #1 of the ad-driven business for years now.
Once users leave your page, they become exponentially less likely to load more ad-ridden pages from your website.
Ironically this is also why there is so much existential fear about AI in the media. LLMs will do to them what they do to primary sources (and more likely just cut them out of the loop). This Google story will get a lot of clicks. But it is easy to see a near future where an AI agent just retrieves and summarizes the case for you. And does a much better job too.
> But it is easy to see a near future where an AI agent just retrieves and summarizes the case for you. And does a much better job too.
I am significantly less confident that an LLM is going to be any good at putting a raw source like a court ruling PDF into context and adequately explaining to readers why the decision matters, which details matter, and what impact it will have. They can probably do an OK job summarizing the document, but not much more.
I do agree that given current trends there is going to be significant impact to journalism, and I don’t like that future at all. Particularly because we won’t just have less good reporting, but we won’t have any investigative journalism, which is funded by the ads from relatively cheap “reporting only” stories. There’s a reason we call the press the fourth estate, and we will be much poorer without them.
There’s an argument to be made that the press has recently put themselves into this position and hasn’t done a great job, but I still think it’s going to be a rather great loss.
> significantly less confident that an LLM is going to be any good at putting a raw source like a court ruling PDF into context and adequately explain to readers why
If you think that's the case, you should really give current LLMs another shot. The version of ChatGPT from 3 years ago has more in common with the average chatbot from 50 years ago than it does the ChatGPT from today.
What condescending nonsense is this? I use all the major LLM systems, mostly with their most expensive models, and when I ask them for sources, including specifically in many cases sources for legal questions, half the time the linked source will not be remotely relevant and will not remotely substantiate the claim it is cited for. Almost never is it without an error of some significance. They all still hallucinate very consistently if you push them into areas that are complicated and non-obvious; when they can't figure out an answer, they make one up. The reduction in apparent hallucinations in recent models seems to be more that they've learned specific cases where they should say they don't know, not that the problem has been solved in a broader sense.
This is true for first-party applications as well as for custom integrations, where I can explicitly check that the context should be grounding them with all of the relevant facts. It doesn’t matter; that isn’t enough. You can tell me I’m holding it wrong, but we’ve consulted with experts from Anthropic and from OpenAI, and with people who have done major AI integrations at some of the most prominent AI-consuming companies. I’m not holding it wrong. It’s just a horribly flawed piece of technology that must be used with extreme thoughtfulness if you want to do anything non-trivial without massive risks.
I remain convinced that the people who can’t see the massive flaws in current LLM systems must be negligently incompetent in how they perform their jobs. I use LLMs every day in my work and they are a great help to my productivity, but learning to use them effectively is all about understanding the countless ways in which they fail, the things they cannot be relied on for, and where they actually provide value.
They do provide value for me in legal research, because sometimes they point me in the direction of caselaw or legal considerations that hadn’t occurred to me. But the majority of the time, the vast majority, their summaries are incorrect, and their arguments are invalid.
LLMs are not capable of reasoning that requires non-obvious jumps of logic more than one small step removed from the examples they’ve seen in their training. If you attempt to use them to reason about a legal situation, you will immediately see them tie themselves in knots, because they are not capable of that kind of reasoning, on top of their inability to accurately understand and summarize case documents and statutes.
There's a simpler explanation: they are comparing LLM performance to that of regular humans, not perfection.
Where do you think LLMs learned this behavior from? Go spend time in the academic literature outside of computer science and you will find an endless sea of material with BS citations that don't substantiate the claim being made, entirely made up claims with no evidence, citations of retracted papers, nonsensical numbers etc. And that's when papers take months to write and have numerous coauthors, peer reviewers and editors involved (theoretically).
Now read some newspapers or magazines and it's the same except the citations are gone.
If an LLM can meet that same level of performance in a few seconds, it's objectively impressive unless you compare to a theoretical ideal.
LLMs are already great at contextualizing and explaining things. HN is so allergic to AI, it’s incredible. And it’s leaving you behind.
They are. I use LLMs. They need to be given context. Which is easy for things that are already on the Internet for them to pull from. When people stop writing news articles that connect events to one another then LLMs have nothing to pull into their context. They are not capable of connecting two random sources.
Edit: also, the primary point is that if everyone uses LLMs for reporting, the loss of revenue will cause the disappearance of the investigative journalism that it funds, which LLMs sure as fuck aren’t going to do.
Is this article investigative? Summary of the court case pdf is trivial for an LLM and most will probably do a better job than the linked article. Main difference being you won't be bombarded with ads and other nonsense (at least for now). Hell I wouldn't be surprised if the reporter had an LLM summarize the case before they wrote the article.
Content that can't be easily made by an LLM will still be worth something. But go to most news sites and their content is mostly summarization of someone else's content. LLMs may make that a hard sell.
I think it's a mix of shortsightedness and straight up denial. A lot of people on here were the smart nerdy kid. They are good at programming or electronics or whatever. It became their identity and they are fuckin scared that the one thing they can do well will be taken away rather than putting the new tool in their toolbox.
The problem I may have with using an LLM for this is that I am not already familiar with the subject in detail and won't know when the thing has:
* Strayed from reality
* Strayed from the document and is freely admixing with other information from its training data without saying so. Done properly, this is a powerful tool for synthesis, and LLMs theoretically are great at it, but done improperly it just muddles things
* Has some kind of bias baked in: "in summary, this ruling is an example of judicial overreach by activist judges against a tech company which should morally be allowed to do what they want". Not such a problem now, but I think we may see more of this once AI is firmly embedded into every information flow. Currently the AI company game is training people to trust the machine. Once they do, what a resource those people become!
Now, none of those points are unique to LLMs: inaccuracy, misunderstanding, wrong or confused synthesis and especially bias are all common in human journalism. Gell-Mann amnesia and institutional bias and all that.
Perhaps the problem is that I'm not sufficiently mistrustful of the status quo, even though I am already quite suspicious of journalistic analysis. Or maybe it's because AI, though my brain screams "don't trust it, check everything, find the source", remains in the toolbox even when I find problems, whereas for a journalist I'd roll my eyes, call them a hack and leave the website.
Not that it's directly relevant to the immediate utility of AI today, but once AI is everything, or almost everything, then my next worry is what happens when you functionally only have published primary material and AI output to train on. Even without model collapse, what happens when AI journobots inherently don't "pick up the phone", so to speak, to dig up details? For the first year, the media runs almost for free. For the second year, there's no higher level synthesis for the past year to lean on and it all regresses to summarising press releases. Again, there are already many human publications that just repackage PRs, but when that's all there is? This problem isn't limited to journalism, but it's a good example.
"Based on the court's memorandum opinion in the case of United States v. Google LLC, Google is required to adhere to a series of remedies aimed at curbing its monopolistic practices in the search and search advertising markets. These remedies address Google's distribution agreements, data sharing, and advertising practices.
Distribution Agreements
A central component of the remedies focuses on Google's distribution agreements to ensure they are not shutting out competitors:
No Exclusive Contracts: Google is barred from entering into or maintaining exclusive contracts for the distribution of Google Search, Chrome, Google Assistant, and the Gemini app.
No Tying Arrangements: Google cannot condition the licensing of the Play Store or any other Google application on the preloading or placement of its other products like Search or Chrome.
Revenue Sharing Conditions: The company is prohibited from conditioning revenue-sharing payments on the exclusive placement of its applications.
Partner Freedom: Distribution partners are now free to simultaneously distribute competing general search engines (GSEs), browsers, or generative AI products.
Contract Duration: Agreements with browser developers, OEMs, and wireless carriers for default placement of Google products are limited to a one-year term.
Data Sharing and Syndication
To address the competitive advantages Google gained through its exclusionary conduct, the court has ordered the following:
Search Data Access: Google must provide "Qualified Competitors" with access to certain search index and user-interaction data to help them improve their services. This does not, however, include advertising data.
Syndication Services: Google is required to offer search and search text ad syndication services to qualified competitors on ordinary commercial terms. This will enable smaller firms to provide high-quality search results and ads while they build out their own capabilities.
Advertising Transparency
To promote greater transparency in the search advertising market, the court has mandated that:
Public Disclosure: Google must publicly disclose significant changes to its ad auction processes. This is intended to prevent Google from secretly adjusting its ad auctions to increase prices.
What Google is NOT Required to Do
The court also specified several remedies it would not impose:
No Divestiture: Google is not required to sell off its Chrome browser or the Android operating system.
No Payment Ban: Google can continue to make payments to distribution partners for the preloading or placement of its products. The court reasoned that a ban could harm these partners and consumers.
No Choice Screens: The court will not force Google to present users with choice screens on its products or on Android devices, citing a desire to avoid dictating product design.
No Sharing of Granular Ad Data: Google is not required to share detailed, query-level advertising data with advertisers.
A "Technical Committee" will be established to assist in implementing and enforcing the final judgment, which will be in effect for six years."
Frankly I don't think that's bad at all. This is from Gemini 2.5 pro
I guess they are unable to value the fact that I am more likely to read and trust stories from their website if they give me honest information about where their stories come from, which I can then read further (and rely on them to always point me to as a guide).
they likely, and probably correctly, do not want you as a customer. people who are discerning and conscious like this generally use an adblocker, and even if you don't, are generally less easily influenced by adverts in the first place. most people like this tend towards wealthy, so it's a valuable demographic if they can get past those two issues, but they're not easy to get past
You made me snort with laughter with how right you were. I in fact have 2 adblockers on, and I actively ignore and sanitize some of my history (like Youtube) to not get directed towards advertising or other rabbit holes I don't want to see, even though I never click a single ad.
But I do pay for quality journalism / news websites!
Most consumers cannot identify which website they are currently looking at. Google, Facebook, giveuscardinfozzzz.com, all the same. No distinguishing or discernible features or difference.
This is one of the practices I hate the most on the internet.
Sometimes it's so ridiculous that a news site will report about some company and will not have a single link to the company page or will have a link that just points to another previous article about that company.
It has gotten absolutely out of control. I will be reading an article about a new game, and the article won't even have a link to the store page to buy the game...
Which store page should they be linking to? Inevitably what you’re asking for is how we’ve ended up with sites spinning off thousands of articles stuffed full of affiliate links.
Just link to a few? There's a finite set of stores a game is usually on
On PC, it'll be Steam, GOG, maybe Humble. Then on consoles you have Xbox, Playstation and Nintendo. If you wanna put affiliate link, go for it. It's better than no link at all.
These articles already bait my click for ads by never putting the name of the game in the title anyways. At least let me get to the game and buy it.
It’s not about insecurity - it’s more like a user will accidentally click on the link, end up on the company’s site, not realise they’ve left the news site, be confused as to why the news site is trying so hard to sell them a dishwasher, not remember they were just reading an article about them, and will be scared and alienated.
Oh my god, the horror! The user will be confused, not remember, and they will be gasp scared? Alienated?? After accidentally clicking a link??? Jesus Christ the internet is such a scary place.
Most of that stuff like court decisions and patents isn't copyrighted anyway. They can host a copy on their own site and display ads around it if they want to.
Who is this for though? Your average user would not be able to use it or understand the purpose of it. A big image of a padlock with a tick saying “SECURE AND VERIFIED” would be just as effective.
Have to start somewhere. Especially in this age of AI where everything can be faked. Put a "learn more" link that says how they can verify the authenticity. If people aren't interested in learning, that's on them. A padlock doesn't tell you anything. The padlock next to the URL just says the connection is encrypted. You used to be able to pay $200 and the cert authority would make you send documents proving you are who you say you are, and it'd show your company name next to the address bar. I went through the process once. It was a good idea IMO, not sure why we got rid of it. Maybe shiesty authorities that weren't doing their due diligence.
The average user doesn't understand TLS but they use it. The average user doesn't understand TCP but they still use it. That doesn't sound like a problem to me. Having signed documents isn't a bad thing.
> Ironically this is also why there is so much existential fear about AI in the media. LLMs will do to them what they do to primary sources (and more likely just cut them out of the loop).
Maybe.. not. LLMs may just flow where the money goes. Open AI has a deal with the FT, etc.
The AI platforms haven't touched any UI devolution at all because they're a hot commodity.
> By the way, a pet peeve of mine right now is that reporters covering court cases (and we have so many of public interest lately) never seem to simply paste the link to the online PDF decision/ruling for us all to read, right in the story.
I have the same peeve, but to give credit where it is due, I've happily noticed that Politico has lately been doing a good job of linking the actual decisions. I just checked for this story, and indeed the document you suggest is linked from the second paragraph:
https://www.politico.com/news/2025/09/02/google-dodges-a-2-5...
I’ve noticed this in New York Times articles in the last couple years. Articles are heavily interlinked now - most “keyword” terms will link to a past article on the same topic - but the links rarely leave the Times’ site. The only exception is when they need to refer back to a prior story that they didn’t cover, but that another publication did. Sources are almost never linked; when they are, it’s to a PDF embed on the Times’ own site.
I assume they and all the other big publications have SEO editors who’ve decided that they need to do it for the sake of their metrics. They understand that if they link to the PDF, everyone will just click the link and leave their site. They’re right about that. But it is annoying.
About a year ago, when the NYTimes wrote an article titled something like "Who really gets to declare if there is famine in Gaza?", the conclusion of the article was that "well boy it sure is complicated, but Gaza is not officially in famine". I found the conclusion and wording suspect.
I went looking to see if they would link to the actual UN and World Food Program reports. The official conclusions were that significant portions of Gaza were already officially in famine, but that not all of Gaza was. The rest of Gaza was just one or two levels below famine, but those levels have names like "Food Emergency" or whatever.
Essentially those lower levels were what any lay person would probably call a famine, but the Times did not mention the other levels or that parts were in the famine level - just that "Gaza is not in famine".
To get to the actual report took 5 or 6 hard-to-find backlinks through other NYTimes articles. Each article loaded with further NYTimes links making it unlikely you'd ever find the real one.
It's true that they do this sort of thing for political reasons, but it sounds like the original NYT report wasn't meant to be merely a paraphrase of a specific UN report? In which case, it would be legitimate to cite other sources and report that they disagree?
No it was a paraphrase of the report and cited no sources that disagreed. It simply sneakily misrepresented the contents and buried the links to the actual report.
The editorial board would probably prefer the NYTimes not get murdered by the current political climate - which of course is part of why the political climate is what it is.
Sure, after you dismiss the pop-up telling you to become an ars subscriber.
I’m only angry about this because I’ve been on ars since 2002, as a paid subscriber for most of that time, but I cancelled last year due to how much enshittification has begun to creep in. These popups remove any doubt about the decision at least.
(I cancelled because I bought a product they gave a positive review for, only to find out they had flat-out lied about its features, and it was painfully obvious in retrospect that the company paid Ars for a positive review. Or they’re so bad at their jobs they let clearly wrong information into their review… I’m not sure which is worse.)
Not just court cases. But so many situations where the primary sources are relevant. Most recently, I’ve seen journalists refer to questionable social media posts that they frame in a certain way but the actual posts don’t align with that frame
We're talking about secondary sources (newspapers) linking to primary sources (a PDF of the court ruling). You showed a tertiary source (Google search) linking to a secondary source (BBC).
> By the way, a pet peeve of mine right now is that reporters covering court cases (and we have so many of public interest lately) never seem to simply paste the link to the online PDF decision/ruling for us all to read, right in the story.
Usually I would agree with you, however, the link is in the article hyperlinked under "Amit Mehta" in the 3rd paragraph. Now could the reporter have made that clearer...yes, but it's still there.
As a reporter, I can tell you that your comment stems from a common fallacy: y’all think you know better than reporters what our jobs are and what the dynamics of our publishing platform entail.
For some reason, everyone feels like they would know how to be a journalist better than the actual professionals.
That said, reporters most probably have nothing to do with what you’re decrying. Linking policies are not the reporter’s business.
There are probably multiple layers of SEO “experts” and upper management deciding what goes on page and what not.
Funnily enough, they might be super anal about what the story links, and then let Taboola link the worst shit on the Internet under each piece…
So please, when you start your sentence with “reporters” please know that you’re criticizing something they have no power to change.
How is providing factual information (e.g., "The full court ruling is available at https://court.rulings/case_123456.pdf", or at least "The case is number 123456.") not part of the reporter's job? No need to link to it, just provide the fact.
I sympathize with how annoying it must be to have other people messing up your work, but also, if your name is at the top of the page, and there's not really any other way for readers to know anyone in particular that is taking responsibility for any specific detail on that page, it's obviously going to be your reputation on the line to some extent.
It doesn’t matter. From the general population point of view, whoever writes the article is the “reporter”, and “they” don’t provide the links. You can argue otherwise and it won’t change the optics.
I don't really care if you think people don't understand details of the job you do, or the system in which you operate. Your name is on the article and it's my expectation at this point that someone telling a story give me the original source when it's easily available. I don't need to know the complications or reasons why it isn't done, I want the right outcome.
If anything, you should be helping to cut through the BS layers and insisting that the original source link (or, even just the full name of the court case) be included with your reporting.
> I think it should be adopted as standard journalistic practice in fact -- reporting on court rulings must come with the PDF.
Bafflingly, I’ve found this practice to continue even in places like University PR articles describing new papers. Linking to the paper itself is an obvious thing to do, yet many of them won’t even do that.
In addition to playing games to avoid outbound links, I think this practice comes from old journalistic ideals that the journalist is the communicator of the information and therefore including the source directly is not necessary. They want to be the center of the communication and want you to get the information through them.
I would go so far as to inherently mistrust any legal reporting that does not link to the ruling or trial footage at this point. I've watched multiple public trials and seen reporting that simply did not reflect what actually went on.
> I would rather be able to see the probably dozens of pages ruling with the full details rather than hear it secondhand from a reporter at this point
And the reporter would rather you hear it second hand from them :)
I agree, online "journalists" are absolutely terrible at linking to sources. You'll have articles which literally just cover a video (a filmed press conference, a YouTube video, whatever) that's freely available online and then fail to link to said video.
I don't know what they're teaching at journalistic ethics courses these days. "Provide sources where possible" sounds like it should be like rule 1, yet it never happens.
It is also meant to lessen the legal burden: when they don't link to the primary source, nobody can claim the reporting is inaccurate, missing essential facts, or made up.
I can’t comment on this as I don’t know the case-law well, but I’m struggling to understand how citing but not linking to a source lessens the ability of anyone to make claims about accuracy, whether in a court of law or in the court of public opinion. Can you provide details?
There is a link right there in 3rd paragraph: "U.S. District Judge Amit Mehta", though strangely under the name...
> I would rather be able to see the probably dozens of pages ruling with the full details rather than hear it secondhand from a reporter at this point.
There is no way you'd have time for that (and more importantly, neither would your average reader), but if you do, the extra time it'd take you to find the link is ~0.0% of the total extra time needed to read the decision directly, so that's fine?
> with the full details
You don't have the full details in those dozens of pages; for example, even the basics of the judge's ideological biases are not included.
The actual answer is that the majority of journalists are summarizing other journalists, who are summarizing someone they asked about the original content. They have never seen it themselves, so they can't link it.
> I am not too familiar with antitrust precedent, but to what extent does the judge rule on how specific the data sharing need to be (what types of data, for what time span, how anonymized, etc. etc.) or appoint a special master? Why is that up to the judge versus the FTC or whoever to propose?
The judge doesn't propose, he rules on what the parties propose, and that can be an iterative process in complex cases. E.g.. in this case, he has set some parameters in this ruling, and set a date by which the parties need to meet on the details within those parameters.
They don't necessarily want to be the gatekeepers of information, they just want your next click to be another news story on their website.
External links are bad for user retention/addiction.
This also has a side effect of back linking no longer being a measure of a 'good' website, so good quality content from inconsistently trafficked sites gets buried on search results.
It's simple - the reason there's no PDF is that most people don't want one. If they did, the reporter would be incentivized to include it. You're complaining about them not serving a tiny, tiny minority of readers.
> reporters covering court cases (and we have so many of public interest lately) never seem to simply paste the link to the online PDF
Would note that this significantly varies based on whether it's ad-driven or subscription-based/paywalled. The former has no incentive to let you leave. The latter is trying to retain your business.
I feel like way too many journalists or editors still see hyperlinks as a way of "sending traffic to competitors", and as such to be avoided at all cost.
I've noticed this too and I agree it's unacceptable practice. Journalism in general has become wildly resistant to properly citing their sources (or they simply make their citation as difficult to find as possible through various obfuscation techniques) and this is making independent validation of any information online that much more difficult while further entrenching a culture of "just trust me, bro" on the internet in general. It's a deeply infuriating and destructive practice that needs to die out. At least when I was in school & university, properly citing your sources was everything when it came to writing any sort of report or essay. How the adtech industry managed to quietly undo that standard expectation so thoroughly for the sake of engagement metrics is rather nuts to me.
This is my pet peeve about most news articles. Give. me. raw. sources. Not edited clips, RAW CLIPS AND LINKS. I'm so tired of sensationalized news, I always look for raw sources or I assume there's a spin on the story. If someone made a news site that actually gave you raw sources, I would subscribe to them for life.
By the way, the worst laughable offenders of this idea are local TV news stations. As if to get the real insight on some world issue, I'm going to "stay up to date by going to KTVU.com for the latest on this breaking story!".
That's certainly no more accurate of the news division of KTVU, a local Fox-owned station, than it is of the national “News” network with the same corporate parent.
Journalists actively hinder readers from finding the primary source because their coverage outranks it. If readers regularly saw the primary source they would realize how dishonest the Journalists are.
> they must have been using the ruling PDF to write the story
Oh you sweet Summer child :-)
The worst is with criminal cases where they can't even be burdened to write what the actual charges are. It's just some vague 'crime' and the charges aren't even summarized - they're just ignored.
Is it not sad/telling that the reporter of the story couldn't summarize this in the story, but the bot here can? If there were an indicator of the future to come...
Is the incorrectly shared mail piece addressed to someone with a quite similar address, or potentially someone who previously lived there?
Just having thought once in a while about how complicated addresses are, I can only imagine all the things that can go wrong. (both for the post office, and for example, credit cards/banks that have to use addresses in validation of purchases, etc)
Imagine an apartment building with many units. Think of how people differently specify on the address lines which unit they live in? What if they leave off their unit #? What about apartments that are numbered "345 1/2 Second Street"?
What about a new person with the same last name that appears at an address? What do you do about that? Is an address that differs by a very subtle letter a different household? E.g. "345b Second Street"? Should you ship a package there or approve a credit card, or is that likely to be an attempt to fraudulently divert mail to someone else who is nonexistent?
I'm sure it's endlessly complicated, and I have no idea. But I know it will be complicated.
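To make the complication concrete, here's a minimal sketch (hypothetical, in Python) of naive address normalization. The `normalize` function and its abbreviation table are my own illustration, not any real postal system's rules; production systems (e.g. CASS-certified USPS software) handle vastly more cases. It shows both why normalization helps and why it can't resolve the "345b vs 345" ambiguity:

```python
import re

def normalize(addr: str) -> str:
    """Very rough normalization: lowercase, expand '#' and a few
    common abbreviations, strip punctuation. Real address-matching
    software handles far more variants than this toy table."""
    addr = addr.lower()
    addr = re.sub(r"#", " unit ", addr)  # "#4" -> "unit 4"
    addr = re.sub(r"[.,]", " ", addr)
    for pat, repl in {
        r"\bst\b": "street",
        r"\bapt\b": "unit",
        r"\b2nd\b": "second",
    }.items():
        addr = re.sub(pat, repl, addr)
    return " ".join(addr.split())  # collapse extra whitespace

# Two spellings of the same unit collapse to one canonical form...
a = normalize("345 1/2 Second St, Apt 4")
b = normalize("345 1/2 2nd Street #4")
print(a == b)  # True

# ...but "345b" vs "345" stays ambiguous after normalization:
# a different household, a typo, or attempted mail diversion?
c = normalize("345b Second Street")
print(normalize("345 Second Street") == c)  # False
```

Even with a much larger abbreviation table, the hard problems (missing unit numbers, half addresses, a new person with the same surname) aren't solvable by string matching alone, which is presumably why banks and shippers layer fraud heuristics on top.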