sublinear's comments | Hacker News

You'd be surprised how far a lot of places got just using git notes and Jenkins for a very long time.
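To illustrate the kind of glue that implies, here is a rough sketch of a Jenkins build step stamping its result onto the commit it built as a git note. This is only an illustration: the "ci" notes ref and the message format are arbitrary choices, while GIT_COMMIT and BUILD_URL are environment variables Jenkins typically exports when checking out from git.

    # Hypothetical Jenkins post-build step: record where a commit was built
    # by attaching a git note to it. The "ci" notes ref is an arbitrary choice.
    import os
    import subprocess

    commit = os.environ.get("GIT_COMMIT", "HEAD")      # set by the Jenkins git plugin
    build_url = os.environ.get("BUILD_URL", "manual")  # set by Jenkins for every build

    subprocess.run(
        ["git", "notes", "--ref=ci", "add", "-f", "-m", f"built: {build_url}", commit],
        check=True,
    )

    # Anyone with a clone can then read the build trail with: git log --notes=ci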

I would argue the real problem with clickpads is that they're a cost-cutting measure (even the ones on MacBooks). People were fed up with accidental taps, and manufacturers didn't want to pay more for a better trackpad.

The other evil part about trackpads is drivers that don't let you turn off the pointer acceleration because to do so would reveal how jittery the sensors really are.

This is why, even now and even on the highest-end laptops, you must still either slow down your fingers or put up with endless overshooting and undershooting of the cursor. I deeply despise how "heavy" and slow the cursor has felt on every MacBook for at least the past decade. This is the real reason the fucking clickpad has to be so massive!


> There is something “special” about Thinkpads. It’s rare for such a long tradition of design and engineering to be allowed to continue inside mega-companies...

Not... really? They're mega-companies because of deliberate choices like that. Those choices are not the only way to get there, but they've found choices that work. I get that it's trendy to shit on successful businesses and be toxic about the crumbs you enjoy, but it's plain ignorant to call these things "accidents" or "coincidences", and nobody is forcing you to eat the crumbs.

The later comparison to Apple is just as strange. Consumers aren't so particular and will buy anything at any price for almost no reason but marketing. All those people hyping Thinkpads may not be the original manufacturer, but they are selling them on eBay and are effectively the vendors. They hype old Macs just the same.


> the real job is not writing the code, or even building the system — it is acquiring the necessary knowledge to build the system.

Not only is that very true, but the grammar will trigger those who insist on forcing the "that's written by AI" meme. I love it.


> Dwyer said that he started using AI in his art around 2017/2018 but had been making art without the use of AI prior to this ... Dwyer explained that he himself fell into AI psychosis

Isn't this what they said about books/authorship and art/artistry in the past? I'm not really understanding what this "new" psychosis is about. It seems to me that these people are already at risk for developing psychosis.


> I did this purely for fun and learning, I don’t think this is going to be the best way for generating QR codes in production.

And yet, someone somewhere is under pressure to "just get it done" and generate QR codes from URLs in a database.
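To be fair, the "just get it done" version is only a few lines anyway. A minimal sketch, assuming a SQLite file with a hypothetical links(id, url) table and the third-party qrcode package:

    # Sketch only: the database, table, and column names are made up for illustration.
    import sqlite3
    import qrcode  # third-party: pip install "qrcode[pil]"

    conn = sqlite3.connect("links.db")
    for row_id, url in conn.execute("SELECT id, url FROM links"):
        img = qrcode.make(url)          # build the QR code image for this URL
        img.save(f"qr_{row_id}.png")    # write one PNG per row
    conn.close()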


You do realize the developers only "know the code best" because they're busy writing code all day, right?

Nobody wants to be held more accountable with less control over the result.

The moment you tell the devs to focus on working with AI is the moment their guess is as good as anyone else's about what the hell is going on. You're not going to squeeze more productivity out of them this way.


Yes, but the majority opinion is "nothing ever happens" by default for everything, not just tech outages. It's not about sounding smart, but about getting ahead of grifters and clout chasers.

Yep, I think it started as a reaction against the "it's happening!" types, and it is a lot less wrong, but it's still wrong.

The truth is that things do occasionally happen and we should be prepared, even if most of the time they don't.


I otherwise agree with what you're saying, but I think the ratio of conscientious people has fluctuated over time across all generations. It has more to do with what year it is than how old they are.

I think the concern expressed as "impossible" is whether it can ever do those things "flawlessly", because that's what we actually need from its output. Otherwise a more experienced human is forced to do double work: figuring out where it's wrong and then fixing it.

This is not a lofty goal. It's what we always expect from a competent human, regardless of the number of passes it takes them. This is not what we get from LLMs in the same amount of time it takes a human to do the work unassisted. If it's impossible, then there is no amount of time that would ever get this result from this type of AI. This matters because it means the human is still forced to be in the loop, saves no time, and works harder than if they had just not used it.

I don't mean "flawless" in the sense that there cannot be improvements. I mean that the result should be what was expected for all possible inputs, and when inspected for bugs there are reasonable and subtle technical misunderstandings at the root of them (true bugs that are possibly undocumented or undefined behavior) and not a mess of additional linguistic ones or misuse. This is the stronger definition of what people mean by "hallucination", and it is absolutely not fixed and there has been no progress made on it either. No amount of prompting or prayer can work around it.

This game of AI whack-a-mole really is a waste of time in so many cases. I would not bet on statistical models being anything more than what they are.

