akavi's comments

A relational querying DSL: https://github.com/akavi/yarrql/

“Compiles” to SQL, but with a different structural paradigm.


Only if you believe the primary purpose of a corporation is to provide employment, as opposed to generating profit for its shareholders.

(To be clear, I think the latter is both descriptively true and normatively good)


If corporations only exist to make the rich richer, maybe it's time to eat the rich. Corporations' outward goal used to be to satisfy their customers. That may never have been true internally, but nowadays they don't even pretend.

Severely downsizing the company isn't a good vibe for customers. I'd definitely be migrating off Vimeo if I did any serious business with them.


You are aware that insofar as AI chat apps are "hallucinatory text generator(s)", then so is Google Translate, right?

(while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)


> it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT

The objective of that model, however, is quite different to that of an LLM.


I have seen Google Translate hallucinate exactly zero times over thousands of queries over the years. Meanwhile, LLMs emit garbage roughly 1/3 of the time, in my experience. Can you provide an example of Translate hallucinating something?


Agreed, and I use G translate daily to handle living in a country where 95% of the population doesn’t speak any language I do.

It occasionally messes up, but not by hallucinating, usually grammar salad because what I put into it was somewhat ambiguous. It’s also terrible with genders in Romance languages, but then that is a nightmare for humans too.

Pat pat, bot.


Every single time it mistranslates something, that's a hallucination.


Google Translate hasn't moved to LLM-style translation yet, unfortunately


Hmmm, that doesn't seem right. I'm having a hard time finding an actual consumption number, but I am confident it's well below 50%.

The top 10% of households by wage income do receive ~50% of pre-tax wage income, but:

1) our tax system is progressive, so actual net income share is less

2) there's significant post-wage redistribution (Social Security, Medicaid)

3) it's well established that high-income households consume a smaller percentage of their net income.



Buses (and the subway) in NYC are also already heavily subsidized, as is childcare (3-K, Pre-K).

The article generally takes the approach of listing a small handful of (usually very small) polities that each have one of Mamdani's proposed policies, and then claiming that the full suite is therefore "normal" across Europe.


PD's been tolerant of total AZ failures for years (I was an early eng there).


> We're going to stabilize around 10 billion by 2080 according to projections and then decline, hopefully reaching some kind of Star Trek utopia at some point.

10 billion is gonna be the high end by the looks of things, and that decline is hardly going to be conducive to utopia. The math of dependency ratios is inescapably painful.
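
A back-of-the-envelope illustration of that math (toy numbers of my own, ignoring mortality and migration, not actual projections):

    # At a total fertility rate of 1.5 vs. replacement ~2.1, each generation
    # is roughly 0.71x the size of the one before it, so ever fewer workers
    # support each retiree cohort.
    replacement, tfr = 2.1, 1.5
    shrink = tfr / replacement                 # ~0.71 per generation

    retirees = 100.0
    workers = retirees * shrink                # ~71
    old_age_dependency = retirees / workers    # ~1.4 retirees per worker
    print(round(old_age_dependency, 2))        # vs. ~1.0 at replacement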


> I think it's more likely, drawing from biology, that we end up at a stable global population level without having to worry about moving backwards along the metrics of education, income or contraceptive access.

There's absolutely no inherent equilibrating force that will stabilize global fertility rates at replacement. Many countries have blown by replacement (the USA included) and continue on a downward trend year over year.


And even if cultural norms were reversed to pro-birth, it wouldn't be enough to reverse the trend: the decline compounds, and the rising average age produces further complications (hard economic problems for starters, which make people even more hesitant).



I'd actually bet against this. The "bitter lesson" suggests doing things end-to-end in-model will (eventually, with sufficient data) outcompete building things outside of models.

My understanding is that GPT-5 already does this by varying how much CoT it does (in addition to the kind of super-model-level routing described in the post), and I strongly suspect it's only going to get more sophisticated.


The bitter-lesson style of strategy would be to implement heterogeneous experts inside an MoE architecture, so that the model automatically chooses the number of active parameters by routing to experts with more or fewer parameters.

This approach is much more efficient than the one in the paper of this HN submission, because request-based routing requires you to recalculate the KV cache from scratch as you switch from model to model.
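
A minimal sketch of what that could look like (illustrative PyTorch with top-1 routing; names and sizes are made up, and this is not the submission's method): the router's choice of expert also determines how many parameters are active for each token, while everything stays inside one model, so the KV cache is never rebuilt.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HeterogeneousMoE(nn.Module):
        def __init__(self, d_model=64, expert_hidden=(32, 128, 512)):
            super().__init__()
            # Experts of increasing capacity: cheap to expensive.
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, h), nn.GELU(), nn.Linear(h, d_model))
                for h in expert_hidden
            ])
            self.router = nn.Linear(d_model, len(self.experts))

        def forward(self, x):
            # x: (tokens, d_model); top-1 routing for simplicity.
            gates = F.softmax(self.router(x), dim=-1)   # (tokens, n_experts)
            choice = gates.argmax(dim=-1)               # (tokens,)
            out = torch.zeros_like(x)
            for i, expert in enumerate(self.experts):
                mask = choice == i
                if mask.any():
                    # Scale by the gate value so the router still gets gradients.
                    out[mask] = expert(x[mask]) * gates[mask, i].unsqueeze(-1)
            return out

    tokens = torch.randn(8, 64)
    print(HeterogeneousMoE()(tokens).shape)  # torch.Size([8, 64])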

