casebash's comments | Hacker News

Oh, they're actually the bad guys, just folks haven't thought far enough ahead to realise it yet.


> bad guys

You imply there are some good guys.

What company?


There are plenty of companies that don't immediately qualify as "the bad guys".

For instance, of the tech-developing companies I've interviewed with or have friends working at, some build and sell furniture. Some are your electricity provider or transmission operator. Some are building inventory management systems for hospitals and drug stores. Some develop a content management system for a medical dictionary. The list is long.

The overwhelming majority of companies are pretty harmless and ethically mundane. They may still get involved in bad practice, but that's not inherent to their business. The hot tech companies may be paying more (blood money if you ask me), but you have other options.


Depends. Does your definition of “good” mean “perfect”? If so, cynical remarks like “no one is good” would be totally correct.


Signal, Proton, Ecosia, DuckDuckGo, Mastodon, Deepseek.


There are some less bad.

But, can't think of one off hand. Maybe Toys-R-Us? Ooops gone. Radio Shack? Ooops, also gone.

On the scale of Bad/Profit, Nice dies out.


Google circa 2005?

Twitter circa 2012?

In 2025? Nobody, I don't think. Even Mozilla is turning into the bad guys these days.


Signal, Mastodon


Bluesky, Kagi


In my head at least, Bluesky are way closer to "the bad guys". I don't trust them at all; I'm pretty sure that, in spite of what they say, they're going to do the same sort of rug pull that Google did with its "don't be evil" assurances.


Funnily enough, I would actually flip it to say this about Kagi. With Bluesky, everything they have built is available to continue to be useful for people completely independent of what the folks over at Bluesky decide to do. There is no vendor lock in at all.

Kagi, on the other hand, has released none of their technology publicly, meaning they have full power to boil the frog, with no actual assurance that their technology will be useful regardless of their future actions.


Google was bad the moment it chose its business model. See The Age of Surveillance Capitalism for details. Admittedly there was a nice period after it chose its model when it seemed good because it was building useful tools and hadn't yet accrued sufficient power / market share for its badness to manifest overtly as harm in the world.


DeepSeek et al.

Obv


You are searching in the wrong place if you look for "good guys" among commercial companies.


OK, lay it on us.


It’s not unreasonable given the mountain of evidence of their past behaviour to just assume they are always the “bad guy”.


I would normally agree, but in this instance we're talking about the company that made PyTorch and played an instrumental role in proliferating usable offline LLMs.

If you can make that algebra add up to "bad guy" then be my guest.


It seems like you're claiming that PyTorch + an open-weight LLM > everything on this wiki page, especially the anchored section https://en.wikipedia.org/wiki/Facebook_content_management_co...


I am. I genuinely don't understand how Meta's LLM contributions have anything to do with Myanmar.

It's like telling an iPhone user that iCloud isn't trustworthy because of the Foxconn suicide nets. It's basically the definition of a non-sequitur.


Just read Careless People.


I wouldn't call mass piracy [0], for their own competitive gain, a "good" act. Especially when it seems they knew they were doing the wrong thing, and that they know the copyright complaints have grounds.

> The problem is that people don’t realize that if we license one single book, we won’t be able to lean into fair use strategy.

[0] https://www.theatlantic.com/technology/archive/2025/03/libge...


Come on... Is it still necessary to remind everyone how evil Meta is? The only reason they released "open source" models was to annoy the competition. Their latest stunt: https://futurism.com/meta-sketchy-training-ai-private-photos


Don't call them open source when they're not; it's a shared model.


That's just what they call them... Hence the quotes.


They're involved in genocide and enable near-global tyranny through their surveillance and manipulation. There is no excuse for working for or otherwise enabling them.


[flagged]


Well at least they're doing it For Great Justice then.


This is an instance of bad guys fighting bad guys.


Authors of original paper: Samuel G. B. Johnson, Amir-Hossein Karimi, Yoshua Bengio, Nick Chater, Tobias Gerstenberg, Kate Larson, Sydney Levine, Melanie Mitchell, Iyad Rahwan, Bernhard Schölkopf, Igor Grossmann


I'll copy my LinkedIn comment:

"Well done to the UK for not signing the fully compromised Statement on Inclusive and Sustainable Artificial Intelligence for the People and the Planet. Australia shouldn't have signed this statement either given how France intentionally derailed attempts to build a global consensus on how we can develop AI safely.

For those who lack context, the UK organised the AI Safety Summit at Bletchley Park in November 2023 to allow countries to discuss how advanced AI technologies can be developed safely. There was a mini-conference in Korea, and France was then given the opportunity to organise the next big conference, a trust it immediately betrayed by changing the event to be about promoting investment in its AI industry.

They renamed the summit to the AI Action Summit and relegated safety from being the sole focus to just one of five focus areas, and not even an equally important one at that, but one that seems to have been purposefully minimized even further.

Within the conference statement, safety was reduced to a single paragraph that, if anything, undermines it:

“Harnessing the benefits of AI technologies to support our economies and societies depends on advancing Trust and Safety. We commend the role of the Bletchley Park AI Safety Summit and Seoul Summits that have been essential in progressing international cooperation on AI safety and we note the voluntary commitments launched there. We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.”

Let's break it down:

• First, safety is being framed as "trust and safety". These are not the same things. The word trust appearing first is not as innocent as it appears: trust is the primary goal and safety is secondary to it. This is a very commercial perspective: if people trust your product you can trick them into buying it, even if it isn't actually safe.

• Second, trust and safety are not framed as values important in and of themselves, but as subordinate to realising the benefits of these technologies, primarily the "economic benefits". While the development of advanced AI technologies could theoretically create a social surplus that could be taxed and distributed, it's naive to assume that this will be automatic, particularly when the policy mechanisms are this compromised.

• Finally, the statement doesn't commit to continuing to address these risks, but only narrowly to "addressing the risks of AI to information integrity" and to "continue the work on AI transparency". In other words, they're purposefully downplaying any more significant potential risks, likely because discussing more serious risks would get in the way of convincing companies to invest in France.

Unfortunately, France has sold out humanity for short-term commercial benefit and we may all pay the price."


Most of the comments here only make sense under a model where AI isn't going to become extremely powerful in the near term.

If you think upcoming models aren't going to be very powerful, then you'll probably endorse business-as-usual policies such as rejecting any policy that isn't perfect or insisting on a high bar of evidence before regulating.

On the other hand, if you have a world model where AI is going to provide malicious actors with extremely powerful and dangerous technologies within the next few years, then instead of seeming radical, proposals like this start to appear extremely timid.


Released 9th October, 2023


Will Oprah screw up the AI story?

Quite possibly, but likely not as bad as this article.

Complete clickbait title, assumes that the author's hobby horses are the most important thing in the world, bizarrely argues that crypto hype is an "attack on labour".


I agree the article's pretty bad. There's a description of the show's content here: https://deadline.com/2024/08/oprah-winfrey-host-ai-abc-speci... And Oprah talking about AI and a Reid Hoffman + GPT-4 book here: https://youtu.be/bOXRjnXp3-s Quote from Oprah:

>I am just in this moment where I'm fascinated by how this technology AI is going to be a resource for change and improvements and making the world better and yet the other side of it who's in control of it and what happens when the bad guys get it.


Just thought I'd add a comment as someone who came top of the state in my grade in multiple olympiad competitions:

I always felt that a large part of my advantage came from having a strong understanding of maths from the ground up.

I felt that a lot more people could have gained the same level of understanding as I did if they had been willing to work hard enough, but I also felt that almost no-one would, because it'd be an incredibly hard sell to convince someone to engage in a years-long project where they'd go all the way back to kindergarten and rebuild their knowledge from the ground up.

In other words, excellence is often the accumulation of small advantages over time.


It's not just working hard enough, but also doing the right kind of work. Many people make the mistake of trying to memorize things without understanding, which may be easy at the beginning when you memorize a fact or two, but it gradually accumulates, especially in math, where the old topics never go away as new ones are introduced. And then the memorizers are actually working much harder, and even that is not enough, so they fail.

So why the aversion to understanding? I suspect part of that is generational; if your parents sucked at math because they relied on memorization, they probably won't introduce you to math as something worth understanding. It will either be "give up", or "work harder" but in the sense of memorizing harder. Not just your parents, but the entire culture around you will be like that. Another part is that most math teachers at elementary schools actually suck at math, because teachers are many, but people good at math are few and they have many better careers available. But another problem is the insistence of the school system on everyone moving forward at a predetermined speed -- sometimes understanding takes time, and when you don't have the time, you are forced to memorize; but once you start memorizing, you usually need to keep memorizing, because understanding can only be built on understanding the prerequisites.

Properly taught elementary-school math should be fun, like this: https://www.matika.in/en/ Fun makes people think.


A lot of people don't understand what understanding really even entails. They don't know that some people actually understand a topic/idea/whatever, can play around with the ideas in their head, think from first principles on the topic. They've never understood a topic in their life.


If passion, or one's own experience, is missing, it may be a case of unknown unknowns for both parents and teachers.

The Matika site looks really nice but I have difficulties comprehending the instructions. Even the very first one for first grade. “Children step by record.” What does that mean? I tried the next one. “During addition we write addends below each other…” What? If all addends are below, no addend is on the top. It makes no sense. Then, “…and the sum below the line” with no line in sight. What, where, which line, how? That was frustrating.


Wow, the English translation sucks much more than I noticed. :(

The whole "stepping" thing is a reference to how they (in the web page author's country) teach basic addition and subtraction at first grade. There is a carpet with numbers on the floor, you start at number zero, and do addition like "2+3" means "two steps forward, pause, three steps forward, now look at the number you are standing on". The carpet is situated so that from the sitting kids' position the zero is on the left, and the numbers increase to the right.

The idea is to turn integers into something "tangible", in a way that can later be extended to negative numbers.

So the instructions should be something like: "You start at a given number. A right arrow means a step forward to a greater number. A left arrow means a step backward to a smaller number. What number do you end at?"

Sorry, I already know all these things by heart, so I didn't notice how the English instructions don't make sense. Guess I should contact the author about it.
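
(Not part of the parent comment, but the carpet model described above maps almost directly onto a toy function; here is a minimal, purely illustrative sketch in Python.)

    # Toy model of the "number carpet": start at zero, each move is a number of
    # steps forward (positive) or backward (negative); return the number you
    # end up standing on.
    def walk(steps):
        position = 0
        for step in steps:
            position += step
        return position

    print(walk([2, 3]))   # "2 + 3": two steps forward, then three more -> 5
    print(walk([5, -7]))  # "5 - 7": the model extends naturally to negatives -> -2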


I feel the same way when I'm on hold and a recording tells me "your call will be answered in the order it was received". This isn't about grammatical pedantry -- I don't care that they didn't say in which -- it's about it not making sense. Which, as I said, isn't grammatical pedantry. But it probably is still a bit pedantic. Still, though, how can one thing have an order? What order was my call received in? Is it before or after itself? I get the sense that whoever recorded that didn't spend any time actually thinking about it, or they would have said "Calls are answered in the order [in which] they are received" or something.


Understanding is critical.

I unfortunately spent the entire introduction to calculus in hospital, so missed it - when I came back to school, I was dropped straight into “differentiate this” and “integrate that”. There was no explanation of what either operation was, just the rules that you followed to obtain the result. I had no idea that we were looking at rates of change or at areas under curves. For the first time in my life, I found myself bewildered, and struggling - until a month later I happened to find myself reading a biography of Newton which actually explained what the purpose of calculus/fluxions was - and then it became easy, as it was obvious if a result was nonsensical.


I knew people who somehow missed the information that fractions are the same as division. So they could e.g. reduce the fraction 40/20 to 4/2, then after thinking about it longer also to 2/1, and... then had no idea what to do next.

For me it was completely mind-blowing: how could someone do fractions without understanding what they are? But I imagine that at school they could probably solve some problems, couldn't solve others, got a C, and moved on to the next lesson.
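
(Not from the parent comment, but a quick illustration of the "a fraction is just a division" point, using Python's fractions module.)

    from fractions import Fraction

    f = Fraction(40, 20)
    print(f)              # 2: 40/20 reduces all the way down to 2/1, i.e. plain division
    print(f == 40 / 20)   # True: the reduced fraction and the division give the same number
    print(Fraction(4, 2) == Fraction(2, 1))  # True: 4/2 and 2/1 are the same number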


I just think that for some people math is very fun from a very young age, and so of course they practice it. For others it may not be, so it is hard work for them. E.g. I have enjoyed doing these exercises ever since I can remember. When walking home I used to multiply different numbers in my head as a pastime. Most people are not going to do those things, and it didn't feel like hard work at all.

In first grade I used to run through workbooks, addicted to solving those problems like some addictive mobile or video game, to the point where the teacher had to stop me, and I was frustrated.

I only had this addiction to math and physics - a bit to chemistry, and I couldn't really focus on other text / memorization based subjects.

And it makes sense to me that genetically in a population you will have brains out of the box that are naturally optimized for different specialities, since having a specialized brain allows you to have more power in that specific area. Problem is when you force those specialised brains into the same way of studying.


Exactly. We enjoy different activities. For math oriented kids it's not a grind, it's interesting and fun. For me, reading novels was much more of a grind, as I just wasn't that interested in people and their conflicts and condition.

It took until my twenties for me to realize the value in the humanities and "social" topics.

Similarly, most people will naturally learn about countless types of fashion and the connotations of liking various music bands, etc., which is actually quite a lot of information to memorize. But it's fun and feels relevant, while math feels disembodied and irrelevant to their social goals in life.


I think mathematics education is pretty horrible this way. You only start actually learning the foundations of math in your 3rd or 4th year of undergrad.

At least nowadays there are a shit ton of YouTube resources and more, so a self-interested kid can learn it far more easily. I tried, and the books that were out there were... sparse, and textbooks are written for other professors, not students.


I can only blame teachers. In primary school, after four years they finally managed to get to the point where there's only one kid per class left (that would be you, I guess) who has fun with math. At home I am fighting an uphill battle because I know it can be fun (my kid even likes logic puzzles). Living in Germany, for the record.


I'd rather blame the system that teaches the teachers. I'm certainly not going to blame someone for not knowing how to teach an onramp that they don't even know exists because they themselves were never taught properly.


Sometimes it’s just the teacher.

I loved reading until my grade 4 teacher decided we would all write a book report every week. Haven’t read a story book since. It’s been 20+ years.

Forced fun is never fun. The other grade 4 class wrote three that year.


People react differently to tasks and teachers; there is no one way to do it. I got good grades because a teacher let us repeat a task like "write a report" seven to fourteen times. The feedback was given by him in class, highlighting the important points for getting a good grade, and then 24 hours after handing in the report we got it back with notes mentioning which important parts we had missed.

This taught me the rather simple lesson that getting it right on the first try is really hard.

Writing a book report is completely different from reading a book. I have heard of people studying literature at university getting sick of books because of the same issue.


That sounds like a dream. I have a distinct memory of doing in-class writing in 3rd grade where the teacher would force us to redo it if there were mistakes and gave minimal feedback. As far as my 3rd-grade memory can recall, I rewrote it several thousand times and never got it all the way right.


It was a dream for ME. I always think of that teacher when I do code reviews. The important part is how you manage to communicate things effectively. I had one friend who never managed to get better and complained, not loudly but it was clear it felt like hell to him.

I do not know if the instructions would have worked for you; maybe it just worked on the ones who really saw a need to improve at this specific task. I know most people misunderstand me when I give out instructions.


Yet understanding is necessary but not sufficient when you read university math, especially advanced courses.

Proofs assume you have the elusive thing referred to as ”mathematical maturity”, which means many algebraic manipulation steps are skipped because it’s assumed you can just see the result straight away.

This ability to see the connection is not understanding but learning by rote, having done the same tricks with similar equations a thousand times.

This is what makes advanced math books/courses slow for me as a CS PhD researcher. I can progress through them, but very slowly, and it takes a massive amount of time to work through what just happened. If you take 60 instead of 20 math courses, the routine you have is just completely different. I guess you can call it fluency in the language.

(For example, right now I'm reading optimal control & variational calculus along with the functional analysis it needs; it's heavy.)


Most kids don't build up knowledge over time, they forget it all over summer vacation.


Very well put. Many people are very blind to this; they forget that everything they can do, they at some point had to learn as well. And not everyone learns everything at the same time.

Anecdotally, this is something I can actually confirm from personal experience with math. For as long as I can remember, I had trouble with a lot of it.

Then during the last years of high school I had an excellent teacher and a lot of concepts actually did start to click on some level. Frustratingly, I still had a lot of trouble, even though I understood the abstract concepts much better.

In order to solve problems, I still had to apply a lot of concepts I was supposed to have learned in all the years before.


How would you approach rebuilding foundational knowledge from the kindergarten level? I have completed all the courses on Khan Academy from kindergarten through 6th grade and have also practiced with more challenging problems beyond those provided on Khan Academy. I'm trying to find the most effective strategies to solidify these fundamental skills.


The idea of starting from scratch and rebuilding one's knowledge, especially when it means going back to the basics, is daunting.


I think much of that 'daunt' comes from the lack of instructional resources needed to support a solo journey through higher math. Yes, there are some great illuminating sources (like Khan Academy and 3blue1brown), but if you're embarking on an epic quest (like recapping a BA in math), the essential guidance needed for coherent and graceful passage through all the requisite concepts simply does not exist -- short of reading 20 HS and college textbooks, which will subject you to a maddening amount of redundancy while leaving many fundamental concepts underexplained.

The day that large language models can capably tutor me through the many twisted turns of higher math -- that's when I'll believe that deep AI has achieved something truly useful.


Can you link a chat and show specifically where one falls off explaining, e.g., complex numbers or integration by parts? It's been a while since my math minor, but ChatGPT seemed to be able to guide me through what I recall of those topics.


I always sucked at math, even though I did it in undergrad. I basically did this over the course of the last five years to try to get better. It went something like this:

Spivak - Calculus. This was a bad idea. Got maybe 30% of it. Gave up at Taylor series.

Hammack - Book of Proof. Finally understood how to prove things, and induction arguments.

Abbott - Understanding Analysis. Got far, things fell apart around the Gamma function.

Apostol - Volume I. Got better at calculus. Also trigonometry. Exercises were easy. Skipped differential equations. It was too hard.

Hoffman/Kunze - Linear Algebra. Gave up after a few chapters, too hard.

Friedberg/Insel - Linear Algebra. Much better, got to the Spectral Theorem and gave up.

Rudin - Principles of Mathematical Analysis. Absolutely brutal, probably got 30% of it.

Abbott, round 2. Much easier this time, got through the whole book.

Spivak, round 2. Much better, got through the whole book. Actually found it easy.

Hubbard - Vector Calculus. Gave up early, it was too hard.

Apostol - Volume 2. Much better. Stopped somewhere in the middle when it got too focused on differential equations and physics stuff.

Back to Friedberg / Insel - Made it through the spectral theorem.

In between I was doing a lot of mathematical statistics and probability stuff like Casella-Berger (I did this book twice, each time going back to the math where I floundered). I've worked through just about every exercise in the above books and watched YouTube video lectures where they exist (there is a good one for Rudin). Solution manuals sometimes exist; otherwise you have to find university courses based on the books and look for homework assignments where they have posted solutions. Quizlet has OK solutions, though some are buggy. Apostol Volume I some dude worked through and posted online.

Anyway point is I refused to accept how stupid I am and I brutally forced myself to become better at math. My attitude was I don’t give a fuck how long it takes, I will keep going until I get better.

I think I’m better now, although I’m still shit. It’s true what von Neumann said: In mathematics you don’t understand things, you just get used to them.


As a fellow brute forcer I can appreciate this comment a lot.


What are the fundamentals one should learn in kindergarten, elementary school, etc?


I'm not going to try to recap all of that, but, as an example, if you have a sufficiently strong understanding of arithmetic, learning basic modular arithmetic should be effortless and the pigeonhole principle completely obvious.
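
(Not from the parent comment, but a quick illustration of how the two connect: among any n+1 integers, two must share a remainder mod n, because there are only n possible remainders. A small sketch of my own.)

    import random

    # Pigeonhole via modular arithmetic: 11 integers, only 10 possible
    # remainders mod 10, so two of them are guaranteed to collide.
    n = 10
    numbers = random.sample(range(1_000_000), n + 1)

    seen = {}
    for x in numbers:
        r = x % n
        if r in seen:
            print(f"{seen[r]} and {x} both leave remainder {r} mod {n}")
            break
        seen[r] = x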

I was quite surprised when I tried applying for a Microsoft internship in uni and they gave me a question on the pigeon-hole principle.


I expect this to end up having been one of the worst-timed blog posts in history. Open-source AI has mostly been good for the world up until now, but we're getting to the point where we're about to find out why open-sourcing sufficiently dangerous models is a terrible idea.


I'm just going to say it.

The author is an idiot who is using insults as a crutch to make his case.


Did you read the report? Its answer to basically anything contentious was "views differ".

