dimtion's comments | Hacker News

Both those questions are answered clearly in the readme:

> compared to avante

> I think it's fairly similar. However, magenta.nvim is written in typescript and uses the sdks to implement streaming, which I think makes it more stable. I think the main advantage is the architecture is very clean so it should be easy to extend the functionality. Between typescript, sdks and the architecture, I think my velocity is pretty high. I haven't used avante in a while so I'm not sure how close I got feature-wise, but it should be fairly close, and only after a couple of weeks of development time.

And:

> Another thing that's probably glaringly missing is model selection and customization of keymappings, etc...


Thank you - I missed that section at the bottom on my phone.


Without knowing exactly how createNewGroup and addFileToGroup are implemented it is hard to tell, but it looks like the code snippet has a bug: the last group created is never pushed to the groups variable.

I'm surprised this "senior developer AI reviewer" didn't catch this bug...
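For reference, here is a hypothetical reconstruction of the pattern being described; the actual snippet isn't reproduced here, so the loop shape, types, and group-size logic are all assumed. Only the helper names createNewGroup and addFileToGroup come from the thread.

```typescript
// Hypothetical sketch of the grouping loop (shapes and sizes are assumed).
interface Group {
  files: string[];
}

function createNewGroup(): Group {
  return { files: [] };
}

function addFileToGroup(group: Group, file: string): void {
  group.files.push(file);
}

function groupFiles(files: string[], maxPerGroup: number): Group[] {
  const groups: Group[] = [];
  let current = createNewGroup();
  for (const file of files) {
    if (current.files.length >= maxPerGroup) {
      groups.push(current);
      current = createNewGroup();
    }
    addFileToGroup(current, file);
  }
  // The bug described above: without this final flush, the last
  // group created is never added to `groups`.
  if (current.files.length > 0) {
    groups.push(current);
  }
  return groups;
}
```

The classic fix is that final flush after the loop; forgetting it silently drops the last group.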


With huge blobs of binary model weights, dynamic linking is cool again.


Dynamic linking has always been cool for writing plugins.

It is kind of ironic that languages that take so much pride in going back to early linking models have to resort to much heavier OS IPC for similar capabilities.


Which languages?

IIUC Go and Rust resort to OS-IPC-based plugin systems mainly because they refused to have a stable ABI.

On the other hand, at $DAYJOB we have a query engine written in C++ (which itself uses mostly static linking [1]) loading mostly statically linked UDFs and ... it works.

[1] Without glibc, but with libstdc++ / libgcc etc.
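To make the "heavier OS IPC" point concrete, here is a minimal sketch of what such a plugin system looks like, written in TypeScript/Node purely as an illustration (the file names and message shape are made up): the host spawns the plugin as a child process and exchanges serialized messages over a pipe, instead of dlopen-ing a shared object into its own address space.

```typescript
// host.ts -- an OS-IPC "plugin" host (illustrative sketch only).
// Instead of dynamically linking the plugin into the process, the host
// runs it as a separate process and talks to it over a pipe.
import { fork } from "node:child_process";

const plugin = fork("./plugin.js"); // compiled plugin entry point (made-up path)

plugin.on("message", (reply) => {
  console.log("plugin replied:", reply);
  plugin.kill();
});

// Every call crosses a process boundary: serialize, context switch, deserialize.
plugin.send({ op: "greet", name: "world" });
```

```typescript
// plugin.ts -- the plugin side, running in its own process.
process.on("message", (msg: any) => {
  if (msg.op === "greet") {
    process.send?.({ result: `hello, ${msg.name}` });
  }
});
```

Every call pays for serialization and a process round trip, which is exactly the overhead a dlopen-style plugin avoids; in exchange, the host and the plugin no longer need to agree on an ABI.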


Doesn't Rust's static linking also have to do with its strategy of aggressive monomorphization? IIRC, for instance, every concrete instantiation of a generic type gets its own compiled code, so it would basically be impossible for a dynamic library to do this, since it wouldn't know how it would be used, at least not without some major limitations or performance tradeoffs.


Well if it loads code dynamically, it is no longer static linking.

Also, it isn't as if there is a stable ABI for C and C++ either, unless everything is compiled with the same compiler, or you use Windows-style dynamic libraries, or something like COM to work around the ABI limitations.


Which Apple has put a fairly large effort into improving on iOS over the last few years.


Hosting some services on a VPS provides far better availability of data than doing it at home, especially when you need it the most. For example, when you are moving to a new home and need to save documents, or when you are abroad, need a document, and there is a power outage at home.

Even if you have to trust a 3rd party with your data, (1) you can minimize the privacy risk with encryption, and (2) VPS/cloud providers usually have different privacy guarantees than free Google Drive...
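As a minimal sketch of point (1), assuming a Node environment: encrypt the document locally with AES-256-GCM (via Node's built-in crypto module) before it ever leaves your machine. The `uploadToVps` call below is a hypothetical placeholder, not a real API.

```typescript
// Minimal sketch: encrypt a document locally before it ever reaches the VPS.
import { randomBytes, createCipheriv } from "node:crypto";
import { readFileSync } from "node:fs";

function encryptForUpload(path: string, key: Buffer): Buffer {
  const iv = randomBytes(12); // unique per file; standard AES-GCM nonce size
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(readFileSync(path)), cipher.final()]);
  // Store the iv and auth tag alongside the ciphertext for later decryption.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

const key = randomBytes(32); // in practice, derive from a passphrase and keep it off the VPS
// uploadToVps(encryptForUpload("passport-scan.pdf", key)); // hypothetical upload call
```

The provider then only ever stores ciphertext; the key never leaves your devices.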


Or businesses could do what they do in Europe: set the same after-tax price in every state.


It is interesting because whether SMART is a "niche jargon term" is very dependent on the audience. The author had no reason to assume that the post would be posted on HN, where the audience is probably larger than the originally intended one.

In many places NixOS is a niche jargon term.

Also, in the first paragraph of the article there is a direct link to smartd with a clear explanation of what SMART is, so there is no need to Google it.


It is called the Brussels Effect[0].

Put simply, given how large the EU market is, and how strong its regulation is, it is cheaper for companies to comply, even outside of the market.

[0]: https://en.wikipedia.org/wiki/Brussels_effect


This explains why they comply with the EU regulations even in the USA, but not why they don't lobby harder in the EU to avoid those regulations in the first place.


One reason is that there are more hands in the cookie jar in the EU. Less so in the US.


Given your last sentence, incidental complexity can be created and destroyed (and is more difficult to destroy than to create).

The quote would probably be more accurate as:

> "ESSENTIAL Complexity can neither be created nor destroyed, only moved somewhere else."


Essential complexity can also be created and destroyed, though sometimes it happens earlier in the design process. Picking the problem you choose to solve is how you control essential complexity.


Essential complexity is inherent to the problem you have. The solution is layered between product design, technical design and implementation. What is essential complexity for a layer can be accidental for the layer above.


That just makes it a tautology. It basically says “essential complexity exists”.


It's often a matter of framing. When you abstract, refactor, or move complexity, it should serve to make the rest of the system/application easier to understand, or to make those adding features to the application(s) more productive as a whole. If it doesn't do one of those things, you probably shouldn't do it.

It's one thing to shift complexity to a different group/team, such as one managing microservice infrastructure and orchestration, which can specialize in that aspect of many applications/systems. It's very different to create those abstractions and separations when the same people will be doing the work on both sides, as this adds cognitive overhead to the existing team(s). Especially if you aren't in a scenario where your application's performance is in imminent danger of suffering as a whole.


It's a frame of mind.

Often a developer will see something big or complex and treat it as a problem.

But they should consider whether this matches the size/scope of the problem being solved.


> But they should consider whether this matches the size/scope of the problem being solved.

In professional software development projects, especially legacy projects, oftentimes the complexity is not justified by the problem. There is always technical debt piling up, and eventually it starts getting in the way.


> oftentimes the complexity is not justified by the problem.

Oftentimes means not always -- what would you say some projects are doing right so that their complexity is justified by the problem?


Maybe, but it is still useful to know that essential complexity exists and to identify it in your project.


Yes :)

That is the more precise adaptation of Tesler's Law.

Obviously, you can always also add superfluous complexity :)


We are starting to get there: https://viper.cs.columbia.edu/


Note that this is from October 2021.

