beej71's comments | Hacker News

> I believe that explicitly teaching students how to use AI in their learning process, that the beautiful paper direct from AI is not something that will help them later, is another important ingredient.

IMNSHO as an instructor, you believe correctly. I tell my students how and why to use LLMs in their learning journey. It's a massively powerful learning accelerator when used properly.

Curricula have to be modified significantly for this to work.

I also tell them, without mincing words, how fucked they will be if they use it incorrectly. :)


> powerful learning accelerator

You got any data on that? Because it's a bold claim that runs counter to all results I've seen so far. For example, this paper[^1] which is introduced in this blog post: https://theconversation.com/learning-with-ai-falls-short-com...

[^1]: https://doi.org/10.1093/pnasnexus/pgaf316


Only my own two eyes and my own learning experience. The fact is students will use LLMs no matter what you say. So any blanket "it's bad/good" results are not actionable.

But if you told me every student got access to a 1-on-1 tutor, I'd say that was a win (and there are studies to back that up). And that's one thing LLMs can do.

Of course, just asking your tutor to do the work for you is incredibly harmful. And that's something LLMs can do, as well.

Would you like to have someone available 24/7 who can give you a code review? Now you can. Hell yeah, that's beneficial.

How about when you're stuck on a coding problem for 30 minutes and you want a hint? You already did a bunch of hard work and it's time to get unstuck.

LLMs can be great. They can also be horrible. For the last thing I wrote in Rust, I could have used LLMs and learned nothing; it would have taken me a lot less time to get the program written! But that's not what I did. I painstakingly used them to explore all the avenues I did not understand, and I gained a huge amount of knowledge writing my little 350-line program.


I don't think that study supports your assertion.

Parent is saying that AI tools can be useful in structured learning environments (i.e. curriculum and teacher-driven).

The study you linked is talking about unstructured research (i.e. participants decide how to use it and when they're done).


You can no-true-Scotsman it, but that study is a structured task. It's possible to generate an ever-more-structured tutorial, but that's asking ever more from teachers. And to what end? Why should they do that? Where's the data suggesting it's worth the trouble? And cui bono?

Students have had access to modern LLMs for years now, which is plenty of time to spin up and publish a study...


To quote the article:

"To be clear, we do not believe the solution to these issues is to avoid using LLMs, especially given the undeniable benefits they offer in many contexts. Rather, our message is that people simply need to become smarter or more strategic users of LLMs – which starts by understanding the domains wherein LLMs are beneficial versus harmful to their goals."

And that is also our goal as instructors.

I agree with that study when using an LLM for search. But there's more to life than search.

The best argument I have to why we should not ban LLMs in school is this: students will use it anyway and they will harm themselves. That is reason enough.

So the question becomes, "What do instructors do with LLMs in school so the LLM's effect is at least neutral?"

And this is where we're still figuring it out. And in my experience, there are things we can do to get there, and then some.


Comparing that study to how any classroom works, from kindergarten through high school, is ridiculous.

What grade school classes have you ever been in where the teacher said "Okay, get to it" and then ignored the class until the task was completed?

I'm not saying it's not a Scotsman: I'm saying you grabbed an orange in your rush to refute apples.


Because if an ICE agent violates your constitutional rights, you have zero recourse. There is no freedom.

If you're not afraid of that, you've got your head in the sand.


In my experience (programmer since 1983), it's massively faster to leverage an LLM and obtain quality code when working with technology that I'm proficient in.

But when I don't have expertise, it's the same speed or even slower. The better I am at something, the faster the LLM coding goes.

I'm still trying to get better at Rust, and I'm past break-even now. So I could use LLMs for a speed boost. But I still hand-write all my code because I'm still gaining expertise. (Here I lean into LLMs in a student capacity, which is different.)

Related to this, I often ask LLMs for code reviews. The number of suggestions it makes that I think are good is inversely proportional to the experience I have with the particular tech used. The ability to discard bad suggestions is valuable.

This is why I think being an excellent dev with the fundamentals is still important—critical, even—when coding with LLMs. If I were still in a hiring role, I'd hire people with good dev skills over people with poor dev skills every time, regardless of how adept they were at prompting.


Lots of things aren't fearsome until they're pointed at you.

> at least the next 5 years

That's not much of a flex. The people who are worried about China taking the lead are looking at velocity and acceleration, not position.


Yes, we did. And that discussion had a most definite conclusion.

I think using AI for tech documentation is great for people who don't really give a shit about their tech documentation. If you were going to half-ass it anyway, you can save a lot of money half-assing it with AI.

> Is there any surprise that there's a dearth of armed citizens ready to stand up for them?

Forget the left. Why don't they stand up for themselves?


How does this apply to Trump and Mar-a-Lago, then? Genuine question.

Trump was requested to return the classified documents several times. He said he returned them all, then said he didn't need to return them all, then said he actually declassified them with his mind.

And yeah, it's not a great situation with terrible optics. It would've been better for everyone if he just didn't steal the classified documents to begin with or, once requested, he returned them.


I guess my question is, given the GP's assertion that private citizens holding classified information isn't a crime, how does the law apply to Trump here?

Is there another law saying that government officials need to return all classified documents that would apply to him and not to the reporter?



Thank you! Cost me three points, but I really wanted this information and was having trouble finding it. :)

It doesn't. Different rules de facto for the ruling class and the peons. That's one of the failures in American society Trump has been exploiting his whole life.

Seems like a decent balance to me. They note that there's no substitute for experiential learning. The harder you work, the more you get out of it. But there's a balance to be struck there with time spent.

What I do worry about is that all senior developers got that experiential education by working hard, and they're applying it to their AI usage. How are juniors going to get that same education?


This is also what I often wonder.

IMHO, AI is a multiplier, and it compounds more as seniority grows and you learn how to leverage it as a tool.

But in case of juniors, what does it compound exactly?

Sure, I see juniors being more independent and productive. But I also see them being stuck with little growth. A few years ago, they would've grown tremendously in a year, at least on the technical side. What do they get better at now? Gluing APIs together via prompting while never even getting intimate with the coding aspect?

