Hacker News | jbritton's comments

The Reddit example is about two different design choices. The DOM is a tree of state that needs to stay in sync with your app state, and the question is how to make that happen without turning your code into a mess. The old Reddit had to first construct the DOM and then, for every state change, determine which DOM nodes need to change, find them, and update them. Knowing what needs to change gets ugly in a lot of apps. The alternative is to realize that constructing a DOM from any arbitrary state is pretty much the same as constructing it from the initial state. Then you don't have to track which DOM nodes must change on every state change, which is a massive reduction in code complexity.

I will grant that there is something similar to the "expression problem". Every time a new state element is introduced, it may affect the creation of every node in the DOM; conversely, every time a UI element is added, it may affect every state transition. The first Reddit can be fast, but you have to manage all the updates. The second is slow, but easier to develop. I'm not sure going any lower solves any of that. The React version can be made more efficient through intelligent compilers that are better at detecting change and doing updates; the React model allows for tooling optimizations, and these might well beat hand-written changes. The web also has the complexity of client/server with long delays, syncing client/server and DOM state, and the HTTP protocol. Desktop apps and game engines don't have these problems.
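The two approaches the comment contrasts can be sketched in a few lines. This is a toy model with hypothetical names (`renderAll`, `patchOne`), not real React or old-Reddit code; the view is an array of strings standing in for DOM nodes.

```typescript
// Shared app state: a list of comments, each possibly collapsed.
type CommentItem = { id: number; text: string; collapsed: boolean };

// React-style: rebuild the whole view from state every time.
// No bookkeeping about *what* changed; everything is re-derived.
function renderAll(comments: CommentItem[]): string[] {
  return comments.map(c => (c.collapsed ? `[${c.id}] (hidden)` : `[${c.id}] ${c.text}`));
}

// Old-Reddit-style: keep a rendered view and patch only the node
// affected by a change. Faster, but the caller must know exactly
// which node to find and update for every possible state change.
function patchOne(view: string[], comments: CommentItem[], changedId: number): void {
  const i = comments.findIndex(c => c.id === changedId);
  view[i] = comments[i].collapsed ? `[${comments[i].id}] (hidden)` : `[${comments[i].id}] ${comments[i].text}`;
}

const comments: CommentItem[] = [
  { id: 1, text: "first", collapsed: false },
  { id: 2, text: "second", collapsed: false },
];
const view = renderAll(comments);
comments[1].collapsed = true;
patchOne(view, comments, 2);
```

After the patch, the targeted update produces the same view as a full re-render; the difference is purely who carries the complexity of knowing what changed.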

The thing is that you can still have high-level abstractions without them needing to be as slow as React. React does a slow thing by default (rerendering every child component whenever state changes, so every component in the UI if top-level state is changing), and then requires careful optimisation to correct for that decision.

But you can also just... update the right DOM element directly, whenever a state changes that would cause it to be updated. You don't need to create mountains of VDOM only to throw it away, nor do you need to rerender entire components.

This is how SolidJS, Svelte, and more recently Vue work. They use signals and effects to track which state is used in which parts of the application, and update only the necessary parts of the DOM. The result is significantly more performant, especially for deeply nested component trees, because you're just doing way less work in total. But the kicker is that these frameworks aren't any less high-level or easy to use. SolidJS looks basically the same as React, just with some of the intermediate computations wrapped in functions. Vue is one of the most popular frameworks around. And yet all three perform at a similar level to an application built with optimal vanilla JavaScript.
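The signal/effect tracking described above can be sketched in a few dozen lines. This is a toy, not SolidJS's actual implementation or API: effects register themselves as subscribers of each signal they read, so a state change re-runs only the effects that depend on it.

```typescript
type Effect = () => void;
let currentEffect: Effect | null = null; // the effect currently running

function createSignal<T>(value: T): [() => T, (v: T) => void] {
  const subscribers = new Set<Effect>();
  const read = (): T => {
    // Dependency tracking: whoever reads this signal while an effect
    // is running becomes a subscriber of the signal.
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };
  const write = (v: T): void => {
    value = v;
    subscribers.forEach(fn => fn()); // re-run only dependent effects
  };
  return [read, write];
}

function createEffect(fn: Effect): void {
  currentEffect = fn;
  fn(); // first run records which signals this effect reads
  currentEffect = null;
}

// Usage: only the effect that reads `count` re-runs on updates.
const [count, setCount] = createSignal(0);
let renders = 0;
let shown = "";
createEffect(() => { renders++; shown = `count is ${count()}`; });
setCount(5); // re-runs the one dependent effect, nothing else
```

In a real framework the effect would update one specific DOM node; the point is that no diffing or component re-render is needed to know which node that is.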


We measure computer performance in the billions and trillions of ops per second. I'm sorry, but if an app takes 200ms to hide some comments, the app or the tech stack it's built on is badly made.

> The web has complexity also of client/server with long delays and syncing client/server and DOM state, and http protocol. Desktop apps and game engines don’t have these problems.

Massively multiplayer games consistently update in under 16ms.


I don't know why people always bring up "but networks! but client/server!" when I point out that 100% client-side behavior is slow on the web.

> The web has complexity also of client/server with long delays and syncing client/server and DOM state, and http protocol. Desktop apps and game engines don’t have these problems.

What part of hiding a comment requires an HTTP round trip? In 200ms you could do 20 round trips.


I'm fairly confident that the new Reddit React implementation could be improved in performance by a factor of 3x to 10x. I would be interested to hear from anyone with good reason to think otherwise. I can certainly imagine React-like systems capable of statically determining DOM influence well enough to make comment-collapsing negligible.

It is blatantly obvious to anyone with just a little bit of experience that the reddit devs barely know what they are doing. This applies to their frontend as well as backend. For some reason, reddit is also the only major social network where downtime is expected. Reddit throwing 500 errors under load happens literally every week.

Presumably the mobile app works better; they don’t care very much about the website because they want to push everyone to the app anyway.

Reddit also puts the "eventually" in "eventually consistent". Not in the sense of consistency being driven by events, but in the colloquial sense of "someday". The new message indicator will go away ... eventually. Your comment will show up ... eventually.

I had a conversation with Claude about what language to work in. It was a web app and it led me to Typescript mainly because of the training data for the model, plus typing and being able to write pure functions. Haskell might have been preferred except for the lower amounts of training data.

I have often thought layouts should be done by a constraint solver. Then there could be libraries that help simplify specifying a layout, which feed constraints to the solver.
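The "layout as constraints" idea can be made concrete with a tiny example. This is a minimal sketch, not a real solver library: layout rules are expressed as linear equations `a*left + b*right = c` and the 2x2 system is solved with Cramer's rule. Real systems (e.g. Cassowary) also handle inequalities and priorities, but the core idea is the same.

```typescript
type Eq = { a: number; b: number; c: number }; // a*left + b*right = c

function solve2([e1, e2]: [Eq, Eq]): { left: number; right: number } {
  const det = e1.a * e2.b - e1.b * e2.a;
  if (det === 0) throw new Error("constraints are not independent");
  // Cramer's rule for the 2x2 linear system.
  return {
    left: (e1.c * e2.b - e1.b * e2.c) / det,
    right: (e1.a * e2.c - e1.c * e2.a) / det,
  };
}

// Constraints: nav + sidebar fill a 900px window,
// and the sidebar is twice as wide as the nav.
const layout = solve2([
  { a: 1, b: 1, c: 900 },  //  left +  right = 900
  { a: -2, b: 1, c: 0 },   // -2*left + right = 0
]);
```

A higher-level layout library would then just be a convenient way of generating these equations for the solver.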



I've done that for desktop apps before. You have to be careful with the effects of sub-pixel rendering and whatnot if your math is continuous, but it's a viable path that I quite like.


Don't use continuous math in either a design system or a constraint solver that you expect random developers to use. Either case will only lead to problems.


I largely agree, but there's a little nuance insofar as "interior-point" methods are very powerful. You can go a long way by encoding your goals as error functions and letting a gradient-based optimizer do the rest.
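The "encode your goals as error functions" approach can be sketched as follows. This is a hypothetical 1-D layout, not any specific library: we want a box centered at 400 but at least 120 wide, each goal becomes a squared-error term, and plain gradient descent with a numeric gradient minimizes the total.

```typescript
// Total layout error: squared distance of the box center from 400,
// plus a penalty whenever the width drops below 120.
function layoutError(x: number, w: number): number {
  const center = (x + w / 2 - 400) ** 2;
  const minWidth = Math.max(0, 120 - w) ** 2;
  return center + minWidth;
}

// Generic gradient descent using central-difference numeric gradients.
function minimize(fn: (x: number, w: number) => number): [number, number] {
  let x = 0, w = 0;
  const h = 1e-4, rate = 0.1;
  for (let i = 0; i < 2000; i++) {
    const gx = (fn(x + h, w) - fn(x - h, w)) / (2 * h);
    const gw = (fn(x, w + h) - fn(x, w - h)) / (2 * h);
    x -= rate * gx;
    w -= rate * gw;
  }
  return [x, w];
}

const [boxX, boxW] = minimize(layoutError);
// Converges to a layout satisfying both goals: center at 400, width >= 120.
```

Interior-point methods are considerably more sophisticated than this, but the shape of the approach (goals in, positions out) is the same.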


iOS used to do this with the Cassowary constraint solver, pre-SwiftUI. It’s the worst thing to work with: so much code turning constraints on and off, dynamically adding constraints when you have new views. And that’s before you get into conflicts.


I was thinking about C++ and if you change your mind about whether some member function or parameter should be const, it can be quite the pain to manually refactor. And good refactoring tools can make this go away. Maybe they already have, I haven’t programmed C++ for several years.


These were paper trades, not actual trades. Sometimes a few seconds can significantly affect the price one gets.


Some have realized that LLMs don’t really reason and that LLMs may not be as amazing as the claims. However, I think LLMs plus agents, plus advances that are likely to come may very well prove to be more valuable than anyone can foresee. It’s very difficult to predict the future profitability of tech.


> LLMs don’t really reason

Do you have a test for this?

Or is it based on the presumption that reasoning skills cannot evolve, it can only be the result of "intelligent design"?


I have spent many hours with them on coding tasks. As things currently stand, once context or complexity reaches a certain point they become completely incapable of solving problems, and that point occurs on very simple things. They appear completely brain-dead at times, although they are magnificent liars at making you think they understand the problem. That said, I recently got ChatGPT 5 to solve a problem in a couple of hours that Claude Sonnet 4 was simply never going to solve, so they are improving. I don’t know the limits. I’m more hopeful that a feedback loop with specialized agents will take things much further. I’m extremely skeptical that bigger context windows and larger models are going to get us reasoning; the skepticism comes from observation. Clearly no one knows how thinking actually works. I don’t know how to address the evolve part. LLMs don’t directly mutate and face selective pressure like living organisms. Maybe a simulation could be made to do that.


Gotham Chess has done chat bot chess championships. The chat bots make a few good moves, and then begin making illegal moves, randomly removing or adding pieces, and completely ignoring obvious threats and attacks. It is so obvious that the pattern matching is not resulting in reasoning. Another example is Towers of Hanoi. An LLM can write code to solve it, because that’s an easy pattern match. But it can’t write out the steps beyond a 3-disk puzzle. It has no understanding of the recursive nature of the problem.
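To make the Hanoi point concrete: the code really is a short pattern-matchable recursion, but the number of steps to write out is 2^n - 1, doubling with every disk, which is where enumerating moves without grasping the recursion falls apart.

```typescript
// Towers of Hanoi: move n disks from `from` to `to` using `via` as spare.
function hanoi(n: number, from: string, to: string, via: string): string[] {
  if (n === 0) return [];
  return [
    ...hanoi(n - 1, from, via, to), // park n-1 disks on the spare peg
    `move disk ${n} from ${from} to ${to}`,
    ...hanoi(n - 1, via, to, from), // bring them back on top
  ];
}

// 3 disks: 7 moves. 10 disks: 1023 moves. 20 disks: over a million.
const moves = hanoi(3, "A", "C", "B");
```

Writing this function is easy; correctly listing all 1023 moves for 10 disks step by step is the part that requires actually tracking the recursive state.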


>> the presumption that reasoning skills cannot evolve

> I don’t know how to address the evolve part. LLMs don’t directly mutate and have selective pressure like living organisms.

Sorry, that was poorly worded. I meant "can reasoning skills not be evolved through the neural net training phase?"

Sure, once you deploy an LLM, it does not evolve any more.

But let's say you have a person Tom with 5-minute short term memory loss, meaning he can't ever remember more than 5 minutes back. His reasoning skills are completely static, just based on his previous education before the memory loss accident, and the last 5 minutes.

Is "5-minute Tom" incapable of reasoning because he can't learn new things?

> They appear completely brain dead at times

Yes, definitely. But they also manage to produce what looks like actual reasoning in other cases. Meaning, "reasoning, not pattern matching".

So if a thing can reason at some times and in some cases, but not in other, what do we call that?

An LLM is a lot like a regular CPU. A CPU basically just operates step by step: it takes inputs, a state memory, and stored read-only data, runs those through combinational logic to calculate new outputs and updates to the state memory.

An LLM does the same thing. It runs step by step, takes the user input, its own previous output tokens, and stored read-only data, and runs those through a huge neural network to generate the next output token.

The "state memory" in an LLM (the context window) is a lot more limited than a CPU's RAM and disk, but it's still a state memory.

So I don't have a problem imagining that an LLM can perform some level of reasoning. Limited and flawed for sure, but still a different creature than "pattern matching".
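The step-by-step analogy above can be sketched as a loop. Everything here is a stand-in: the "model" is a toy lookup table, not a real LLM, but the control flow (output fed back into the state as input to the next step) is the point.

```typescript
// A step function: given the context so far, produce the next token.
type StepFn = (context: string[]) => string;

// Toy fixed "weights": next token given only the last token.
const toyModel: StepFn = (context) => {
  const table: Record<string, string> = { the: "cat", cat: "sat", sat: "<end>" };
  return table[context[context.length - 1]] ?? "<end>";
};

// Autoregressive loop: each output token is appended to the state
// (the "context window") and read back in on the next step.
function generate(model: StepFn, prompt: string[], maxSteps: number): string[] {
  const context = [...prompt];
  for (let i = 0; i < maxSteps; i++) {
    const next = model(context);
    if (next === "<end>") break;
    context.push(next);
  }
  return context;
}
```

Swap the lookup table for a neural network and the loop is structurally the same: a stateless step function plus a growing state memory.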


Can you post your experiments please?


I don’t have a log of work that I can post.


Would you agree that many humans, especially younger ones, are also incapable of reasoning?


I think it’s completely uncontroversial that younger humans are incapable of reasoning, no? The only area for discussion is at which age (if any) this changes for an individual.


I would say that's a pretty controversial take unless you strictly mean babies. By the time a kid can walk, they are certainly capable of some level of reasoning: walking successfully, before it becomes muscle memory, requires a fair amount of reasoning, and children very rapidly figure out new things, both spatial and logical.


So if you believe babies do not reason, and walking humans do, then younger humans up to some age do not reason, yes?


And dogs. Looks like all animals are rational by their definition.


A CIMT scan is another option. It uses ultrasound to measure carotid artery wall thickness.


ICE is targeting groups of people based upon a probability that some are illegal. This violates the 4th amendment. The 4th amendment requires reasonable suspicion of a specific individual. This is also why they don’t have warrants, because no judge would grant it. The LA courts have ruled their activity illegal. The federal government is appealing. The SCOTUS ruling allows ICE to continue doing what they are doing until the appeal is heard.


If Tesla has a “no charge” list, they should at a minimum notify Carfax. Beyond that, everybody buying a used Tesla needs to know about this caveat; Tesla has a duty to inform the public, though how they would do that I have no idea. Perhaps the vehicle should never have been released back to the owner if it’s so unsafe. I don’t know how that would work legally and financially.


Something that might be useful would be a sub language that didn’t support all the dynamic features that make JIT difficult or slow. Perhaps a module could have a pragma or something to indicate the language set in use. Maybe like Racket. Simply not being able to add new methods or new member variables after initialization would help.


This. Maybe we could call a method of the JIT (it must be exposed) and tell it that we won't use some features of Ruby, globally or inside that scope. That would let it skip some checks. Of course calling that method takes time so it should be something that is called only once. It depends on how the JIT accesses Ruby code.

And if the code actually does something we declared it must not do, we accept any runtime error that might happen.

We must trust dependencies, including Rails if it's a Rails app.


With such constraints, it should also be possible to compile it into a native binary, and then it is very similar to Crystal.

