
It's hard to find the signal in the noise and know which stories I should even read to get a sense of baseline quality. That's partly just a hard problem inherent to any flood of content, but it's also because the recommendation system seems to lack enough data (and might be weighting the wrong things, e.g. the rank #1 story is also the lowest-rated...).

A very cool idea in theory, and something very hard to pull off, but I think in order to get the data you need on how readable each story is, you'll need to work on presentation and recommendation first so they don't distract from what you're actually testing.
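
On the weighting point: one cheap fix is ranking by a Bayesian average instead of the raw mean, so a story with two votes can't outrank one with fifty. A rough sketch in Python (the 1-5 rating scale and the prior weight C are assumptions about your system, not anything I actually know about it):

    # Shrink each story's mean rating toward the site-wide mean until it
    # has enough votes to be trusted. C is how many "phantom" votes at the
    # global mean each story starts with.
    def bayesian_score(mean_rating, num_votes, global_mean=3.0, C=10):
        return (C * global_mean + num_votes * mean_rating) / (C + num_votes)

    print(bayesian_score(1.0, 2))   # ~2.67: two angry readers can't tank a story
    print(bayesian_score(4.5, 50))  # ~4.25: well-sampled stories rise to the top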


Thanks for the feedback - looking at the rest of the comments, I definitely agree it's a common theme. I'll work on fixing those issues so there's less noise.

Right now I paste screenshots of AWS/Azure/GCP into Claude and ask it how to navigate around / what to do / how to set things up. This seems like a much better experience, if only because I wouldn't have to deal with the weird Mac screenshot UX.
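
Under the hood this is just multimodal prompting, so a tool could automate the whole loop. A minimal sketch with the Anthropic Python SDK (the model name, file path, and question are placeholders, and I'm assuming ANTHROPIC_API_KEY is set):

    import base64
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Encode a console screenshot the same way you'd paste it into the app
    with open("aws_console.png", "rb") as f:
        image_data = base64.standard_b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; any vision-capable model
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png",
                            "data": image_data}},
                {"type": "text",
                 "text": "What do I click on this screen to set up an S3 bucket?"},
            ],
        }],
    )
    print(message.content[0].text)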

We human writers love emdashes also ;)


Why are you worried about that world? Is it because you expect science to progress too fast, or too slow?


Too fast. It's already coding too fast for us to follow, and from what I hear, it's doing incredible work in drug discovery. I don't see any barrier to it getting faster and faster, and with proper testing and tooling, getting more and more reliable, until the role that humans play in scientific advancement becomes at best akin to that of managers of sports teams.


In what applications is ⌘Y Undo and not ⌘Z? Is ⌘Y just a redundant alternative?


Ctrl-Y is typically Redo, not Undo. Maybe that's what they meant.

Apparently on Macs it's usually Command-Shift-Z?


Does "as it is currently used" include what this apparently is (brainstorming, initial research, collaboration, text formatting, sharing ideas, etc)?


I'm not sure this is true... we heavily use Gemini for text and image generation in constrained life simulation games and even then we've seen a pretty consistent ~10-15% rejection rate, typically on innocuous stuff like characters flirting, dying, doing science (images of mixing chemicals are particularly notorious!), touching grass (presumably because of the "touching" keyword...?), etc. For the more adult stuff we technically support (violence, closed-door hookups, etc) the rejection rate may as well be 100%.

Would be very happy to see a source proving otherwise though; this has been a struggle to solve!
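
For reference, this is roughly the shape of our setup on the text side (the image side behaves similarly for us): a minimal sketch with the google-generativeai Python SDK, with the model name and prompt as placeholders. Even with every safety threshold relaxed as far as the dial goes, generations still come back blocked:

    import google.generativeai as genai
    from google.generativeai.types import HarmBlockThreshold, HarmCategory

    genai.configure(api_key="...")  # your API key

    model = genai.GenerativeModel(
        "gemini-1.5-flash",  # placeholder model name
        safety_settings={
            HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
            HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
            HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
            HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        },
    )

    response = model.generate_content("A scientist carefully mixing chemicals in a lab")

    # Even with relaxed thresholds, candidates can come back SAFETY-blocked;
    # this is where our ~10-15% rejection rate shows up.
    if not response.candidates or response.candidates[0].finish_reason.name == "SAFETY":
        print("Rejected:", response.prompt_feedback)
    else:
        print(response.text)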


Why would you buy a 5-year-old iPhone for the same price as a new Android with comparable specs, though? If I'm gonna spend $200-300 on a phone, I'd like it to last at least a couple more years. Regardless of OS, you're more likely to get that from a new phone than from any phone that's 5+ years old.


If Apple's still selling it, they'll almost certainly support it at least as long as an above-average Android manufacturer.

The current iOS supports devices back to the iPhone 11 and the SE2, so you can expect the SE3 and the iPhone 13 to get at least two more years of support (no real guarantees, but Apple is still selling new stock of both, and it has a reputation to protect).


> Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen, and you won't be understanding what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.

Using AI myself _and_ managing teams almost exclusively using AI has made this point clear: you shouldn't rely on it as a black box. You can rely on it to write the code, but (for now at least) you should still be deeply involved in the "problem solving" (that is, deciding _how_ to fix the problem).

A good rule of thumb that has worked well for me is to spend at least 20 minutes refining agent plans for every ~5 minutes of actual agent dev time. YMMV based on plan scope (obviously this doesn't apply to small fixes, and applies even more so to larger scopes).


What I find most "enlightening", and also frightening, is seeing people I've worked with for quite some time, and whom I respected for their knowledge and abilities, start spewing AI nonsense and switching off their brains.

It's one thing to use AI like you might use a junior dev that does your bidding, or a rubber duck. It's a whole other ballgame if you just copy and paste whatever it says as truth.

And as for it "obviously" not applying to small fixes: oh yes it does! The AI has tried to "cheat" its way out of a situation so many times it's not even funny any longer (compare yesterday's post about Anthropic's original take-home test, in which they themselves warn you not to just use AI to solve it, since it likes to cheat by e.g. simply enabling more than one core). It's done this often enough that sometimes I don't trust Claude with an answer I don't yet fully understand myself, and dismiss a correct assessment it made as "yet another piece of AI BS".


> if you just copy and paste whatever it says as truth.

It's more difficult than ever, because Google is basically broken and knowledge is shared much less these days - just look at Stack Overflow.


"Ads are always separate and clearly labeled."

I've heard this before...


These promises are worth nothing without a contract that a consumer can sue them for violating. And hell will freeze over before megacorps offer consumers contracts that bind themselves to that degree.


As a consumer, I'd rather just have regulation that mandates that ads be separate and clearly labeled.

