> New techniques are often behind proprietary gates, with shallow papers and slides that only give a hint of how things may work.

I've been able to implement techniques based on such things without too much trouble. Also, Unreal is source-available, although I haven't used its source to learn, and I haven't checked the license for risks in doing so.


I think that the person to whom you replied is speaking of outdoor installations, while you are speaking of controlled (maybe datacenter) installations. I have outdoor fiber running aerially between buildings on my property, in a region with massive seasonal temperature changes. Multiple local FTTH and coaxial ISPs also run fiber on shared utility poles (the same ones that the electrical grid maintains), and when I look at the poles I see communications lines all in the same general area, often mere centimeters apart, if that.

> I feel there has to be something between "I heard about a thing 7th-hand" and "I actively watch political discourse / read scientific papers", but I'm no longer sure The News, as we currently know it, is it.

I have found that some YouTube channels and videos can fill this gap nicely; non-comprehensive examples are below (I have hundreds of subscribed channels). They are mostly not politics, but these things inform politics, since politics is making decisions about other things. This is not a perfect choice, since journalistic integrity and standards do not apply, but I find that this can be mitigated by watching a wide variety: for example, in the field of economics, I regularly watch creators who espouse everything from very free-market capitalism all the way to full-on communism. There are likely other forms of new media that operate at this level of depth, but I haven't found them.

https://www.youtube.com/@TechnologyConnections

https://www.youtube.com/watch?v=FWUaS5a50DI

https://www.youtube.com/@HowMoneyWorks

https://www.youtube.com/@DiamondNestEgg

https://www.youtube.com/@TLDRnews (and associated channels)

https://www.youtube.com/@BennJordan (recent good example https://www.youtube.com/watch?v=vU1-uiUlHTo)


I started watching the full press releases and politicians' interviews, which are normally available on YouTube. It just changed how I view geopolitics. The media is extremely biased and absolutely does not report what people are actually saying. You really should never accept at face value what the news is reporting.

> I started watching the full press releases and politicians' interviews, which are normally available on YouTube.

Is this true for Australian politics? This is exactly what I'm looking for. Currently all my searching for recent events just results in summarised/paraphrased news reports with some footage, or shorts and clickbait.


Parliamentary question time is pretty good for that here in Australia, I'd recommend giving it a listen every now and again.

Yes, and the Senate has a radio broadcast IIRC, so you can listen to every discussion they have!

Thank you for these sources! Happy to see Benn Jordan, How Money Works and Technology Connections as grey links :-)

> I am tired of fighting my own OS.

People cite bugs or incompatible software on Linux as a reason to avoid it and use Windows, but they fail to recognize that Windows actively fights you. I'd take something that's slightly broken by mistake on an utterly open platform, where I can fix it if I care enough, over a closed platform that's actively trying to screw me over.


The US has already banned Chinese EVs, even though, from what I've heard, they're excellent.

For now, such hardware is readily available. Every Walmart, for example, will have it. Amazon has it. PCPartPicker lists numerous other places that you can buy it from.

That would be a horrifying violation of bodily autonomy.

This doesn't mean that it won't happen, but it does make it especially vile if it does.


> - I like languages that let you decide how much you need to "prove it."

Rust is known for being very "prove it," as you put it, but I think that it is not, and it exposes a weakness in your perspective here. In particular, Rust lets you be lax about types (Any) or other proved constraints (borrow checker bypass by unsafe, Arc, or cloning), but it forces you to decide how the unproven constraints are handled (ranging from undefined behavior to doing what you probably want, with performance trade-offs). A language that simply lets you not prove it still must choose one of these approaches to run, but you will be less aware of what is chosen and unable to pick the right one for your use case. Writing something with, for example, Arc, .clone(), or Any is almost as easy as writing it in something like Python at the start (just arbitrarily pick one approach and go with it), but you get the aforementioned advantages, and it scales better: the reader can instantly see "oh, this could be any type" or "oh, this is taken by ownership, so no spooky action at a distance is likely," instead of dredging through the code to try to figure it out.
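
To make that concrete, here's a minimal sketch; the function names are made up for illustration:

```rust
use std::any::Any;

// Lax version: the "don't prove it" option. The signature itself tells
// the reader that the value could be any type, and the failure path is
// an explicit choice (here, a fallback string).
fn describe_lax(value: &dyn Any) -> String {
    match value.downcast_ref::<i64>() {
        Some(n) => format!("an integer: {n}"),
        None => "something I wasn't expecting".to_string(),
    }
}

// Proven version: the type is checked at compile time, so there is no
// runtime check and no failure path to decide on.
fn describe_proven(value: i64) -> String {
    format!("an integer: {value}")
}

fn main() {
    println!("{}", describe_lax(&42_i64));
    println!("{}", describe_lax(&"hello"));
    println!("{}", describe_proven(42));
}
```

Both options are available, but whichever one you pick is visible at the signature rather than being a silent default.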


In practice, though, writing stuff with `Arc` or `.clone()` or `Any` is not as easy as it would be in Python because you've got to write a bunch of extra boilerplate. It's much easier to have local `i64` values that you can pass around as `Copy`. So if all you need are local i64s, then you'll take the easier option and do that.

The same is true at multiple levels. `.clone()` is relatively easy to use, although once you learn the basic rules for borrowing, plain references become easier still. `Arc` solves a specific problem you run into at a certain point when sharing data between threads, but if you're not sharing data between threads (and most of the time you're not), it's just boilerplate and confusing, so you might avoid it and at worst use `Rc`. `Any` is rarely an obvious choice in most contexts; you're really only going to use it when you need it.

The result is that for most simple cases, the precise and "proven" option is typically the easiest to go for. When you deal with more complicated things, the more complicated tools are available to you. That seems to be exactly what the previous poster described, where you can decide yourself how much you need to prove a given thing.
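
A toy sketch of that ladder, using only the standard library:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Copy types: no proof burden at all; the value is just copied.
    let n: i64 = 42;
    let m = n; // `n` is still usable, because i64 is Copy
    println!("{n} {m}");

    // .clone(): sidestep the borrow checker by paying for a copy.
    let s = String::from("hello");
    let t = s.clone(); // `s` and `t` each own their own data
    println!("{s} {t}");

    // Rc: shared ownership within a single thread.
    let shared = Rc::new(vec![1, 2, 3]);
    let also_shared = Rc::clone(&shared);
    println!("{}", shared.len() + also_shared.len());

    // Arc: the same idea, but safe to share across threads.
    let across = Arc::new(vec![1, 2, 3]);
    let handle = {
        let across = Arc::clone(&across);
        thread::spawn(move || across.len())
    };
    println!("{}", handle.join().unwrap() + across.len());
}
```

Each step up the ladder costs a little more ceremony, which is exactly why the simple, proven option wins by default.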


Is that not exactly "deciding how much to prove it"?


> Wherever LLM-generated code is used, it becomes the responsibility of the engineer. As part of this process of taking responsibility, self-review becomes essential: LLM-generated code should not be reviewed by others if the responsible engineer has not themselves reviewed it. Moreover, once in the loop of peer review, generation should more or less be removed: if code review comments are addressed by wholesale re-generation, iterative review becomes impossible.

My general procedure for using an LLM to write code, which is in the spirit of what is advocated here, is:

1) First, feed the existing relevant code into an LLM. This is usually just a few source files in a larger project.

2) Describe what I want to do, either giving an architecture or letting the LLM generate one. I tell it to not write code at this point.

3) Let it speak about the plan, and make sure that I like it. I will converse to address any deficiencies that I see, and I almost always do.

4) I then tell it to generate the code.

5) I skim & test the code to see if it's generally correct, and have it make corrections as needed.

6) Closely read the entire generated artifact at this point, and make manual corrections (occasionally automatic corrections, like "replace all C-style casts with the appropriate C++-style casts," followed by a review of the diff).

The hardest part for me is #6, where I feel a strong emotional bias towards not doing it, since I am not yet aware of any errors compelling such action.

This allows me to operate at a higher level of abstraction (architecture) and remove the drudgery of turning an architectural idea into precise, written code. But, when doing so, you are abandoning those details to a non-deterministic system. This is different from, for example, using a compiler or a higher-level VM language. With those other tools, you can understand how they work, rapidly have a good idea of what you're going to get, and rely on robust assurances. Understanding LLMs helps, but not to the same degree.


I've found that your step 6 takes the vast majority of the time I spend programming with LLMs. Like 10x+ the combined time that steps 1-5 take. And that's if the code the LLM produced actually works. If it doesn't work (which happens quite often), then even more handholding and corrections are needed. It's really a grind. I'm still not sure whether I am net saving time using these tools.

I always wonder about the people who say LLMs save them so much time: Do you just accept the edits they make without reviewing each and every line?


You can have the tool start by writing an implementation plan describing the overall approach and key details, including references, snippets of code, a task list, etc. That is much faster to review and refine than a raw diff, and lets you make sure it matches your intent. Once that's acceptable, the changes are quick, and having the machine do a few rounds of refinement to make sure the diff vs. HEAD matches the plan helps iron out some of the easy issues before human eyes show up. The final review is then easier, because you are only checking for smaller issues and consistency with the plan that you already signed off on.

It's not magic though, this still takes some time to do.


I exclusively use the autocomplete in Cursor. I hate reviewing huge chunks of LLM code at one time. With the autocomplete, I’m in full control of the larger design and am able to quickly review each piece of LLM code. Very often it generates what I was going to type myself.

Anything that involves math or complicated conditions I take extra time on.

I feel I’m getting code written 2 to 3 times faster this way while maintaining high quality and confidence.


This is my preferred way as well. And when you think about it, it makes sense. With advanced autocomplete you are:

1. Keeping the context very small

2. Keeping the scope of the output very small

With the added benefit of keeping you in the flow state (and in my experience making it more enjoyable).

To anyone that even hates LLMs, give autocomplete a shot (with a keybinding to toggle it if it annoys you; sometimes it’s awful). It’s really no different than typing it manually wrt quality etc., so the speed-up isn’t huge, but it feels a lot nicer.


Maybe it subjectively feels like 2-3x faster, but in studies that measure it, we tend to see smaller improvements, more in the range of 20-30% faster. It could be that you are an outlier, of course.


2-3x faster on getting the code written. Fully completing a coding task maybe only 20-30% faster, if we count chasing down requirements, reviews, waiting for CI to pass so I can merge etc.


If it's stuff I have been doing for years and isn't terribly complex, I've found it's generally quick to skim-review. I don't need to read every line; I can glance at it and know it's a loop and why, a function call, or whatever. If I see something unusual, I take that as an opportunity to learn.

I've seen LLMs write some really bad code a few times lately; it seems almost worse than what they were doing 6 or 8 months ago. Could be my imagination, but it seems that way.


Insert before 4: make it generate tests that fail, review them, then have it implement and make sure the tests pass (a minimal sketch of this step is below).

Insert before that: have it create tasks with beads and force it to let you review before marking a task complete.
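
A minimal sketch of the failing-test-first step in Rust (`parse_duration` is a hypothetical function; the `todo!` stub is what makes the first run fail, so you can review the intended behavior before the model implements it):

```rust
/// Parse strings like "90s" or "5m" into seconds. The test below is
/// written and reviewed first; this stub fails it on the first run.
fn parse_duration(_input: &str) -> Option<u64> {
    todo!("implement only after the failing test is reviewed")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_seconds_and_minutes() {
        assert_eq!(parse_duration("90s"), Some(90));
        assert_eq!(parse_duration("5m"), Some(300));
        assert_eq!(parse_duration("nonsense"), None);
    }
}
```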


Don’t make manual corrections.

If you keep all edits driven by the LLM, you can use that knowledge later in the session or ask your model to commit the guidelines to long-term memory.


The best way to get an LLM to follow style is to make sure that this style is evident in the codebase. Excessive instructions (whether through memories or AGENT.md) do not help as much.

Personally, I absolutely hate instructing agents to make corrections. It's like pushing a wet noodle. If there is lots to correct, fix one or two cases manually and tell the LLM to follow that pattern.

https://www.humanlayer.dev/blog/writing-a-good-claude-md


How the heck it does not upset your engineering pride and integrity, to limit your own contribution to verifying and touching up machine slop, is beyond me.

You obviously cannot emotionally identify with the code you produce this way; the ownership you might feel towards such code is nowhere near what meticulously hand-written code elicits.


He speaks of trust and LLMs breaking that trust. Is this not what you mean, but by another name?

> First, to those who can recognize an LLM’s reveals (an expanding demographic!), it’s just embarrassing — it’s as if the writer is walking around with their intellectual fly open. But there are deeper problems: LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? The reader can’t be sure — and increasingly, the hallmarks of LLM generation cause readers to turn off (or worse).

> Specifically, we must be careful to not use LLMs in such a way as to undermine the trust that we have in one another

> our writing is an important vessel for building trust — and that trust can be quickly eroded if we are not speaking with our own voice

