throwaway20174's comments | Hacker News

When they coined the term "influencer", it was a perfect name for exactly what that is.

It's not a question of whether any one person can influence all 7.9B. The question is disparity, much like income disparity.

What happens when fewer and fewer people influence greater and greater numbers? That's the risk worth understanding.


It's tough for me to judge because I've been programming for 30 years, so maybe I'm underestimating how hard it is, but I see learning a new language as very different from trying to understand the graduate-level CS work I've seen at a top STEM school.

Git, shell, the basics.. even simple Python, if you have any programming experience at all - not nearly as hard as what they're teaching in the class.

Most of the time, with something like that - learning LaTeX or git basics - they'll just say you'll pick up what you need. They're not gonna spend 12 weeks on those subjects; they aren't hard enough.


Discrete tools are fairly easy. On the other hand, I think a lot of people here would laugh at the "textbook" for the introductory FORTRAN course I took at said school.

Of course, you were struggling with fairly primitive tools at the time as well. Made a typo? Time to beg the grad students running the facility for some more compute cycles.

Although it's out of print, I don't immediately see a full copy online. https://www2.seas.gwu.edu/~kaufman1/FortranColoringBook/Colo...


Humans have a proven history of re-inventing economic systems, so if AI ends up thinking better than we do (as yet unproven that it can), then we should end up with superior future systems.

But the question is: a system optimized for what? One that emphasizes huge rewards for the few, which requires the poverty of some (or many)? Or a fairer system? No different from the challenges of today.

I'm skeptical that even a very intelligent machine will change the landscape of our difficult decisions, but it will accelerate movement in whichever direction we decide (or have decided for us) to go.


AI is more of a force multiplier than a replacement. If you rated programmers from 0 to 100, AI can take you from 0 to 80, but it can't take you from 98 to 99.

I'd love to record these AI CEOs' statements about what's going to happen in the next 24 months and look back at them when the time comes -- see how "transformed" the world is then.


My guess is more of the same (i.e. mostly crap), but faster.

We still create software largely the same as we did in the 1980s. Developers sitting at keyboards writing code, line by line. This despite decades of research and countless attempts at "expert systems", "software through pictures", and at generating code from various kinds of models or flowcharts, or with different development methodologies and management styles.

LLMs are like scaffolding on steroids, but aren't fundamentally transforming the process. Developers still need the mental model of what they are building and need to be able to verify that they have actually built it.


> We still create software largely the same as we did in the 1980s. Developers sitting at keyboards writing code, line by line.

That's because the single dimension of code fits how the computer works, and we can project any higher-dimensional order onto it. If you go to 2 dimensions, like pictures, it no longer fits the computer model, and everything becomes awkward with the higher dimensions of the domain. The only good 2D representation is the grid (spreadsheets, relational DBs, parallel programming..), and even those can be unwieldy.
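
A trivial sketch of that projection (my own illustration, not the parent's): a 2D grid stored in the machine's one linear dimension via row-major indexing.

    # A 2D grid flattened into a 1D list, the way memory actually works.
    rows, cols = 3, 4
    grid = [0] * (rows * cols)        # one linear dimension

    def at(r, c):
        # Row-major projection: (r, c) in 2D -> a single index in 1D.
        return grid[r * cols + c]

    def set_at(r, c, value):
        grid[r * cols + c] = value

    set_at(1, 2, 42)
    assert at(1, 2) == 42             # the 2D view survives the projection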

The mental model is the higher-dimensional structure that we project onto lines of code. Having an LLM generate it is like throwing paint at a canvas and hoping for the Mona Lisa.


I'm stating the obvious, but these things tend to go either way. We either grossly overestimate the impact, or grossly underestimate it.

In the case of the internet, it ended up going both ways. We overestimated it in the near term and underestimated its impact in the long term.

They could very well be right. I don't think they are. But I've also never seen anything that can scale quite like AI.


Fully self-driving cars have been just 2 years away for what, 10 years now?


In my recent class many people were using AI, and I know people don't think of it this way, but it just totally defeats the purpose of taking the class.


There is something that I find very often gets lost, not in this comment but in the general conversation.

There are reasons to study CompSci that have nothing to do with trying to get a job, or make yourself a better worker. Namely, it's fun!

You can be a lifetime student of it, and it can be very rewarding - both the practical side (programming) and delving into the mathematical underpinnings (theory) and history of computation.

News articles where people tell young people, "don't study computer science" don't get it.


The catch is that when AI handles 95% or 99% of a task, people say great, we don't need humans. 99% is great.

But when that last 1% breaks and AI can't fix it, that's where you need the humans.


By then the price will have increased quite a bit; if you want me to fix your AI crap, you're going to pay until it hurts.


The refinancing is a real issue, but it doesn't matter too much because the Fed can (and will) always intervene. They can soak up any supply to push the 10-year down.

Same reason it doesn't matter whether China etc. wants our treasuries or not. The Fed is always there to buy them.
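
To make that mechanism concrete (a toy sketch of my own, assuming a simplified annual-coupon note; `price` and `yield_from_price` are just illustrative names): bond price and yield move inversely, so a buyer big enough to bid the price up mechanically pushes the yield down.

    # Present value of an annual-coupon bond at yield y.
    def price(face, coupon_rate, years, y):
        coupon = face * coupon_rate
        pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
        return pv_coupons + face / (1 + y) ** years

    # Solve price(y) = p for y by bisection; price is decreasing in y.
    def yield_from_price(face, coupon_rate, years, p, lo=0.0, hi=0.20):
        for _ in range(60):
            mid = (lo + hi) / 2
            if price(face, coupon_rate, years, mid) > p:
                lo = mid   # price still too high -> yield must be higher
            else:
                hi = mid
        return (lo + hi) / 2

    # A 10-year, 4% coupon note at par yields 4%...
    print(yield_from_price(100, 0.04, 10, 100.0))  # ~0.040
    # ...but buying pressure that lifts the price to 105 drops it to ~3.4%.
    print(yield_from_price(100, 0.04, 10, 105.0))  # ~0.034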


I'm not an economist, but I believe this only holds so long as the USD is the global reserve currency. If that ceases to be the case, the Fed has a lot less power to print money and inflate the debt away. I believe this is the argument Ray Dalio has been making for some time now:

https://www.youtube.com/watch?v=xguam0TKMw8


Every country has the ability to print money and buy its own debt. Japan has printed billions and bought equities.

Being a global reserve currency only increases the extent of that printing, not the ability itself.

