I don't mind when other programmers use AI, and I use it myself. What I mind is the abdication of responsibility for the code or the result. I don't think we should have to issue a disclaimer when we use AI any more than I did when I used grep to do the log search. If we use it, we own the result: it's a tool and we need to treat it as such. That's extra important for generated code.
Isn't this what Brooks described more than 50 years ago: there's a fundamental shift when a system can no longer be held in a single mind, and adding people brings a communication and coordination load? A single person who offloads the work to an LLM right at the start gives up that efficiency before even beginning, so unless you're getting the AI to do all the work it will eventually bite you...
Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.
— Melvin E. Conway, How Do Committees Invent?
> there's a fundamental shift when a system can no longer be held in a single mind
Should LLM users invest in both biological (e.g. memory palace) and silicon memory caches?
It took me several years to understand this law. Early on, I'd come into an infra situation and kind of incredulously say stuff like "Why not do (incredibly obvious thing)?" and get frustrated by it quite often.
Usually it's not because people think it can't be done, or shouldn't be done; it's because of this law. Like, yes, in an ideal world we'd do xyz, but the department head of product A is a complete anti-productive bozo that no one wants to talk to or deal with, so we'll engineer around him, that kind of thing. It's incredibly common; once you see it play out, you'll see it everywhere.
This analysis of the real-world effects of Conway's Law seems deeply horrifying, because the implication is that there's nothing you can do to keep communication efficiency and design quality high while also growing an organisation.
I think you'd better link to a good article instead. Good grief, what a horror: a talking head rambling on for 60 minutes.
---
disclaimer: if low information density is your thing, then your mileage may vary. Videos are for documentaries, not for reading an article out to the camera.
After opening the "transcript" on these kinds of videos (from a link in the description, which may need to be expanded), a few lines of JavaScript can extract the actual transcript text without needing to wrestle with browser copy-and-paste. Presumably the entire process could be automated, without even visiting the link in a browser.
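For what it's worth, here's a minimal sketch of the in-browser version, run from the dev-tools console with the transcript panel open. The ytd-transcript-segment-renderer selector is an assumption based on YouTube's current markup and may well change:

    // Assumption: each transcript line is rendered as a
    // <ytd-transcript-segment-renderer> element; adjust the selector if
    // YouTube's markup has changed.
    const segments = document.querySelectorAll('ytd-transcript-segment-renderer');
    const transcript = Array.from(segments)
      .map(seg => seg.innerText.trim())   // each entry is roughly "timestamp\ntext"
      .join('\n');
    console.log(transcript);              // or copy(transcript) in Chrome's console

From there it's plain text that can be searched, skimmed, or fed to whatever summariser you like.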
And probably a few minutes of commercials too. I get the impression this is an emerging generational thing, but unless it's a recorded university course or a very interesting and reputable person.. no thanks. What is weird is that the instinct to prefer video seems motivated by laziness, and laziness is actually an adaptive thing to deal with information overload.. yet this noble impulse is clearly self-defeating in this circumstance. Why wait and/or click through ads for something that's low-density in the first place, that you can't search, etc.?
Especially now that you can transcribe the video and quickly get AI to clean it up into a post, creating/linking a video potentially telegraphs stuff like: nothing much to say but a strong desire to be in the spotlight / narcissism / an acquisitiveness for clicks / engagement. Expecting others to patiently endure infinite ads while they pursue educational goals, or assuming other people pay for ad-free just because you do, telegraphs a lack of respect for the audience, maybe also a lack of self-respect. Nothing against OP or this video in particular. More like a PSA about how this might come across to other people, because I can't be the only person who feels this way.
Always and entirely subjective of course, but I find Casey Muratori to be both interesting and reputable.
> What is weird is that the instinct to prefer video seems motivated by laziness, and laziness is actually an adaptive thing to deal with information overload...
What's even weirder is the instinct to not actually engage with the content of the linked video and a discussion on Conway's Law and organisational efficiency, and instead head straight into a monologue about some kind of emerging generational phenomenon of laziness highlighted by a supposed preference for long video content. That seems somewhat ironic itself, as ignoring the original subject matter to just post your preferences as a 'PSA' is its own kind of laziness. To each their own I guess.
Although I do think the six-hour YouTube 'essays' really could do with some serious editing, so perhaps there's something there after all...
Okay, so you didn't even bother to take a few seconds to step through the video to see if there was anything other than the talking head (I'll help you out a bit, there is).
Either way, it's a step-by-step walk-through of the ideas of the original article that introduced Conway's Law and a deeper inspection into _why_ it might be that way.
If that's not enough then my apologies but I haven't yet found an equivalent article that goes through the ideas in the same way but in the kind of information-dense format that I assume would help you hit your daily macros.
(I didn't downvote you btw)
But anyway, I did step through. And even in the section where he should make his point, he couldn't. I rage-quit this stuff.
Don't take it personally; you might have found great insight in it. But if you want to see my POV: like most humans, I can scan a large text in seconds, processing it with a massively parallel network. When I find an anchor of interest, I can scan around for more context. I can go back to sections to read them more deeply.
A video is a gigabyte of download to convey a few bytes of information, dripping slowly over the span of an hour. A text is a few kilobytes, downloaded in an instant; it takes a few seconds to scan it, a minute to read some parts more deeply, and then I can decide whether it's worth mining deeper. Even then the additional cost will be like 3 minutes.
But, to be fair, I know quite a few people who do not have this ability. They struggle to dissect a text, to chop it apart and quickly pull out the information. But that could also be an issue of not being able to give full bandwidth to an information source. Some people can't focus on a text, but like to listen to books while driving, for example.
That's completely fair, and I actually completely agree about the information density thing. I honestly prefer well-written, concise documentation over tutorial videos for the same reason. However, I've not yet found the equivalent text form of this video (automatic transcription aside), so it's really the only example I've got that seems to extract some of the most salient points from Conway's paper and puts forward an idea as to _why_ this phenomenon occurs. Perhaps a blog post in the making.
Man, I don't know what kind of world you live in, but an hour-long video is a little too much to swallow when reading HN comments. I even gave it a chance, but had to close the tab after the guy prepared you for something and then steered away to explain "what is a law". That's absurd.
Self-regulating in a way that is designed to favour smaller independent groups with a more complete understanding and ownership of whatever <thing> that team does?
Even putting aside the ethical issues, it's rare that I want to copy/paste code that I find into my own project without doing a thorough review of it. Typically, if I'm working off some example I've found, I will hand-type it into my project's established coding style and add comments to clarify things that are not obvious to me in that moment. With an LLM's output, I think I would have to adopt a similar workflow, and right now that feels slower than just solving the problem myself. I already have the project's domain in my mental map, and explaining it to the agent is tedious and a waste of time.
I think this is often overlooked, because on the one hand it's really impressive what the predictive model can sometimes do. Maybe it's super handy as an autocomplete, or an exploration, or for rapidly building a prototype? But for real codebases, the code itself isn't the important part. What matters is documenting the business logic and setting it up for efficient maintenance by all stakeholders in the project. That's the actual task, right there. I spend more time writing documentation and unit tests to validate that business logic than I do actually writing the code that will pass those tests, and a lot of that time is specifically spent coordinating with my peers to make sure I understand those requirements, that they were specified correctly, that the customer will be satisfied with the solution... all stuff an LLM isn't really able to replace.
Thanks for sharing this beautiful essay which I have never come across. The essay and its citations are thought-provoking reading.
IMO, LLMs of today are not capable of building theories (https://news.ycombinator.com/item?id=44427757#44435126). And, if we view programming as theory building, then LLMs are really not capable of coding. They will remain useful tools.
LLMs are great at generating scaffolding and boilerplate code which I can then iterate upon. I'm not going to write

    describe User do
      it "..." do
      end
    end

for the thousandth time, or write the controller files with CRUD actions.. LLMs can do these. I can then review the code, improve it and go from there.
They are also very useful for brainstorming ideas; I treat it as a better Google search. If I'm stuck trying to model my data, I can ask it questions and it gives me recommendations. I can then think about it and come up with an approach that makes sense.
I also noticed that LLMs really lack basic comprehension. For example, no matter how many times you provide the schema file to it (or a part of it), it still doesn't understand that a column doesn't exist on a model and will try to shove it into the suggested code.. very annoying.
All that being said, I have an issue with "vibe coding".. this is where the chaos happens as you blindly copy and paste everything and git push goodbye
We need to invent better languages and frameworks. Boilerplate code should be extremely minimal in the first place, but it appears to have exploded in the last decade.