Hacker News new | past | comments | ask | show | jobs | submit | Toutouxc's comments | login

I am constantly seeing this thing do most of my work (which is good actually, I don't enjoy typing code), but requiring my constant supervision and frequent intervention and always trying to sneak in subtle bugs or weird architectural decisions that, I feel with every bone in my body, would bite me in the ass later. I see JS developers with little experience and zero CS or SWE education rave about how LLMs are so much better than us in every way, when the hardest thing they've ever written was bubble sort. I'm not even freaking about my career, I'm freaking about how much today's "almost good" LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on.

I agree with you on all of it.

But _what if_ they work out all of that in the next 2 years and it stops needing constant supervision and intervention? Then what?


It’s literally not possible. It has nothing to do with intelligence. A perfectly intelligent AI still can’t read minds. 1000 people give the same prompt and want 1000 different things. Of course it will need supervision and intervention.

We can synthesize answers to questions more easily, yes. We can make better use of extensive test suites, yes. We cannot give 1000 different correct answers to the same prompt. We cannot read minds.


Can you? Read minds, I mean.

If the answer is "yes"? Then, yeah, AI is not coming for you. We can make LLMs multimodal, teach them to listen to audio or view images, but we have no idea how to give them ESP modalities like mind reading.

If the answer is "no"? Then what makes you think that your inability to read minds beats that of an LLM?


This is kind of the root of the issue. Humans are mystical beings with invisible sensibilities. Many of our thoughts come from a spiritual plane, not from our own brains, and we are all connected in ways most of us don't fully understand. In short, yes I can read minds, and so can everybody else.

Today's LLMs are fundamentally the same as any other machine we've built and there is no reason to think it has mystical sensibilities.

We really need to start making a distinction between "intelligence" and "relevance". The AI can be perfectly intelligent, but without input from humans, it has no connection to our Zeitgeist, no source material. Smart people can be stupid, too, which means they are intelligent but disconnected from society. They make smart but irrelevant decisions, just like AI models always will.

AI is like an artificial brain, and a good one, but humans have more to our intelligence than brains. AI is just a brain and we are more.


I'm sorry for the low effort comment, but. Lol. Lmao.

If you have an AI that's the equivalent of a senior software developer you essentially have AGI. In that case the entire world will fundamentally change. I don't understand why people keep bringing up software development specifically as something that will be automated, ignoring the implications for all white collar work (and the world in general).

Then who else is still holding a job if a tool like that is available? Manually working people, for the few months or years before robotics development fueled by cheap human-level LLMs catches up?

If We Build It We Will All Die

Yes, and look how far we've come in 4 years. If programming has another 4 years, that's all it has.

I'm just not sure who will end up employed. The near-term state is obviously Jira-driven development, where agents just pick up tasks from Jira, etc. But will that mean the PMs go and we have a technical PM, or will we be the ones binned? Probably for most SMEs it'll just be maybe 1 PM and 2 or so technical PMs churning out tickets.

But whatever. It's the trajectory you should be looking at.


Have you ever thought about the fact that 2 years ago AI wasn't even good enough to write code? Now it's good enough.

Right now you state the current problem is: "requiring my constant supervision and frequent intervention and always trying to sneak in subtle bugs or weird architectural decisions"

But in 2 years that could be gone too, given the objective and literal trendline. So I actually don't see how you can hold this opinion: "I'm not even freaking about my career, I'm freaking about how much today's "almost good" LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on." when all logic points away from it.

We need to be worried, LLMs are only getting better.


That's easy. When LLMs are good enough to fully replace me and my role in society (kind of above-average smart, well-read guy with a university education and solid knowledge of many topics, basically like most people here) without any downsides, and without any escape route for me, we'll probably already be at the brink of a societal collapse, and that's something I can't really prepare for or even change.

All evidence points to the world changing. You're not worrying because worrying doesn't solve anything. Valid.

More people need to be upfront about this reasoning. Instead of building irrational scaffolds saying AI is not a threat. AI is a threat, THAT is the only rational conclusion. Give the real reason why you're not worried.


> Clearly tapping a letter “taps” a different letter

My iPad Mini 6 sometimes gets into this state, especially after deleting something, where tapping one of the keys in the lower-right corner becomes completely impossible: it always registers as a different key (I don't have the iPad nearby to check which one), and it stays broken like this until I press a few other keys. It's incredibly frustrating and it's been there since day 1.


The keyboard actually does this all the time, and many assume they are the problem (making typos, etc.). A few have recorded videos to show what is actually happening and it's wild. If I had a link handy I'd share it. The user directly taps on a letter, and the system picks what it thinks the user actually meant, even when the key hit was dead on.

Turning off slide to type in settings improves the situation, however it still happens.


Note that both very high and very low aperture settings bring their own optical issues. At very low f-numbers (big hole) you’re getting hurt by various aberrations (essentially too many paths that rays from the same point can take to the sensor), and at very high f-numbers you’re getting hurt by diffraction. At the low end, it’s good to stop down a little from what the lens advertises, and at the high end anything over f/13 to f/18 (depending on the gear) is usually quite bad.
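For a rough feel for the diffraction end, the blur spot (Airy disk) grows linearly with the f-number, roughly 2.44 × wavelength × f-number. A minimal sketch; the 550 nm green wavelength is an illustrative assumption, and the exact f-number where it becomes visible depends on the camera's pixel pitch:

```python
# Rough diffraction estimate: Airy disk diameter ~= 2.44 * wavelength * f-number.
# The 550 nm (green) wavelength is an illustrative assumption, not a fixed rule.
WAVELENGTH_UM = 0.55  # 550 nm expressed in micrometres

def airy_disk_um(f_number):
    """Approximate diameter of the diffraction blur spot, in micrometres."""
    return 2.44 * WAVELENGTH_UM * f_number

for n in (2.8, 5.6, 11, 16):
    # Once this spot grows well past the sensor's pixel pitch (a few um on
    # most cameras), stopping down further visibly softens the image.
    print(f"f/{n}: blur spot ~{airy_disk_um(n):.1f} um")
```

At f/16 the spot is already around 21 µm, several times the pixel pitch of a typical sensor, which is why very high f-numbers look soft regardless of lens quality.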

I think they meant that you can pad your regular SD card with random data and leave just enough space for a few photos.

Or even use a computer to custom-format the card with a hilariously small partition (e.g. 128 MB on a 128 GB card).

ahh that makes a lot more sense!

> image "noise" or "grain" that is introduced into a picture as you increase the ISO

Not this absolute shit again. This is not how photography works or how the physics actually works. Image noise does NOT come from high ISO, it comes from low exposure (not enough light hitting the sensor). ISO is just a multiplier between the number of photons and the brightness of a pixel in your photo. The implementation of the multiplier is (usually) half-analog and half-digital, but it's still just a multiplier. If you keep the exposure the same, then changing the ISO on a digital camera will NOT introduce any more noise (except at the extremes of the range, where, for example, analog readout noise may play a role).

This "simulator" artificially adds noise based on the ISO value, as you can easily discover: Set your shutter to 1/500 and your aperture to F8, then switch between ISO 50 and ISO 1600 and look at the letters on the bulb. ISO 50, dark but perfectly readable. ISO 1600, garbled mess. Since the amount of light hitting the simulated sensor stays the same, you should be seeing slightly LESS noise at ISO 1600 (better signal to noise ratio than at low ISO), not more.

edit: To add something genuinely useful: Use whatever mode suits you (manual, Av, Tv) and just use Auto ISO. Expose for the artistic intent and get as much light in as possible (i.e. use a slower shutter speed unless you need to go faster, use a wider aperture unless you need a narrower one). That’s the light that you have, period. Let the camera choose a multiplier (ISO) that will result in a sane brightness range in your JPEG or RAW (you’ll tweak that anyway in post). If the photo ends up too noisy, sorry but there was not enough light.

ISO is an almost useless concept carried over from film cameras where you had to choose, buy and load your brightness multiplier into the camera. Digital cameras can do that on the fly and there’s usually no reason not to let them. (If you can come up with a reason, you probably don’t need this explanation)


> If you keep the exposure the same, then changing the ISO on a digital camera will NOT introduce any more noise

So does this mean that changing the ISO directly on my camera, or in darktable/whatever at post-proc time, is virtually the same?


Yes, as the sibling post says, it's effectively the same in most cameras and exactly the same in certain cameras (not many). Unless, of course, you actively fuck up by shooting a very low-exposure (dark) shot with low ISO (then you lose precision, because your analog measurements get quantized into small integers that are also close to the noise floor), or by shooting a very bright shot with high ISO (where your highlights get multiplied right out of the range of your output format). If you don't actively try to fuck up the shot (AND you shoot RAW), you can make pretty wild changes in post and the data will be there.

That's just one more reason not to be afraid of auto ISO. The camera will choose something sane and you'll have ample room on both sides to get the image you wanted.


If you have an ISO-invariant camera, then yes - the final image would look the same whether you shot at low ISO and raised it in post versus shooting at a high ISO and doing no further editing. You can try it yourself. Or you can read the numerous reviewers who have already done that in the past decade, such as DPReview.

> Image noise does NOT come from high ISO, it comes from low exposure [...] changing the ISO on a digital camera will NOT introduce any more noise (except for at the extremes of the range, where, for example, analog readout noise may play a role).

Sounds like you're saying that setting higher ISO does cause noise, but as long as you don't go too high you won't really notice the difference?


No. What they're saying is that ISO multiplies brightness, essentially exaggerating differences. Roughly, ISO 200 is 2x gain, and so on. So if you have one pixel with a brightness value of 1, and the pixel to the left has a brightness of 5, and an ISO of 500, then they become brightness 5 and 25 respectively. Oversimplification.

Agreed. In other words, ISO is not exposure. Exposure is purely about how much light arrives on the sensor - which is a combination of scene illumination, object reflectivity, relative aperture, and shutter speed. ISO only plays a part in controlling how bright the output image is.

ISO does not create noise. It amplifies (accentuates) the noise that is already there.
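A minimal sketch of that amplification idea, reusing the made-up pixel values and 5x gain from above: the gain widens the absolute gap between neighbouring pixels (making existing noise more visible), but it leaves the underlying ratio untouched:

```python
# Gain (ISO) multiplies every pixel value, noise included.
# The values and the 5x gain mirror the made-up numbers in the comment above.
pixels = [1, 5]  # raw sensor values; the difference is partly noise
gain = 5         # roughly "ISO 500" relative to a base of ISO 100

amplified = [p * gain for p in pixels]
print(amplified)  # [5, 25]: the absolute gap grew from 4 to 20...
# ...but the ratio between the two pixels is still exactly the same:
print(amplified[1] / amplified[0], pixels[1] / pixels[0])
```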

Not sure what you mean, just have a coding agent (e.g. Claude Code) and talk to it.

Looks like Machinarium. I like it.


What a beautiful and nostalgic game that was. I’ve never had a game hit me like that since!


I played it with my wife on the couch over many winter evenings, and then ten years later played it with my daughter. Good times. Reminded me of playing Sierra games as a kid.


Same here, though no kids yet.

I bought the soundtrack on vinyl (by Tomáš Dvořák, aka Floex), then got a record player, aaaand ended up accumulating a ton of records since then.

I still play that record though, it never gets old.

The other game that we enjoyed in a very similar way is Primordia [1]. Named our first cat Crispin afterwards.

You will probably enjoy Boxville [2]; it's very much Machinarium-inspired. Its sequel, Boxville 2, came out recently, so there's more in store.

It's Ukrainian-made (Machinarium is Czech), so the devs share a gritty post-communist childhood to draw inspiration from.

[1] https://primordia-game.com/log.html

[2] https://store.steampowered.com/developer/triomatica


I also love the soundtrack so much and have listened to it thousands of times, especially By The Wall, my favorite song. PS: Thanks for posting the composer’s solo name, Floex, because there were (are?) two people with exactly the same name working at Amanita Design, bizarrely!


There’s also an album called Machinarium Remixed, which is the original soundtrack made into slightly more energetic/EDM tracks. Really good stuff.


I especially love "Mr Handagote" from the soundtrack, absolute masterpiece which gives me goosebumps every time.


I really enjoyed "Samorost 3" by the same developers. Machinarium still takes the cake though.


Don't miss out on their Botanicula too!


Yeah, it's really a masterpiece. It's utterly fantastic.


What kind of car do you drive that doesn't have one?


An EV with a heat pump. I know there is literally a heat exchanger/radiator, but there is not a separate radiator system with its own fluids and pumps.


You don’t get to decide whether a radiator is a radiator just because the coolant can internally shuffle heat to the A/C. I’m assuming that you drive a Tesla, in which case your car still has a big fat low temperature radiator. If you’re driving virtually any other EV on the market, it still has a big fat low temperature radiator, or even multiple.


Literally any ev?


No. My EV, for example literally has servo-controlled shutters that route fresh air to the radiator when needed.


My desktop is a gaming-only machine, it’s still on Windows 10 and it will probably stay on Windows 10 until Steam stops working.


Can you please provide an example?

