
I tend not to listen to people who have no fucking clue what they're talking about.


Unlikely. We'll know when OpenAI has declared itself ruler of the new world, imposes martial law, and takes over.


Why would you ever know? Why would the singularity reveal itself in such an obvious way (until it's too late to stop it)?


One day before he was fired by OpenAI’s board last week, Sam Altman alluded to a recent technical advance the company had made that allowed it to “push the veil of ignorance back and the frontier of discovery forward.” The cryptic remarks at the APEC CEO Summit went largely unnoticed as the company descended into turmoil.

But some OpenAI employees believe Altman’s comments referred to an innovation by the company’s researchers earlier this year that would allow them to develop far more powerful artificial intelligence models, a person familiar with the matter said. The technical breakthrough, spearheaded by OpenAI chief scientist Ilya Sutskever, raised concerns among some staff that the company didn’t have proper safeguards in place to commercialize such advanced AI models, this person said.

THE TAKEAWAY

• OpenAI researchers made a breakthrough in recent months that could lead to more powerful AI models

• Researchers used the new technique to build a model that could solve math problems it had never seen before

• The breakthrough raised concerns among some OpenAI employees about the pace of its advances and whether it had safeguards in place

In the following months, senior OpenAI researchers used the innovation to build systems that could solve basic math problems, a difficult task for existing AI models. Jakub Pachocki and Szymon Sidor, two top researchers, used Sutskever’s work to build a model called Q* (pronounced “Q-Star”) that was able to solve math problems it hadn’t seen before, an important technical milestone. A demo of the model circulated within OpenAI in recent weeks, and the pace of development alarmed some researchers focused on AI safety.

The work of Sutskever’s team, which has not previously been reported, and the concern inside the organization suggest that tensions within OpenAI about the pace of its work will continue even after Altman was reinstated as CEO Tuesday night, and highlight a potential divide among executives.

In the months following the breakthrough, Sutskever, who also sat on OpenAI’s board until Tuesday, appears to have had reservations about the technology. In July, he formed a team dedicated to limiting threats from AI systems vastly smarter than humans. On its web page, the team says, “While superintelligence seems far off now, we believe it could arrive this decade.”

Last week, Pachocki and Sidor were among the first senior employees to resign following Altman’s ouster. Details of Sutskever’s breakthrough, and his concerns about AI safety, help explain his participation in Altman’s high-profile ouster, as well as why Sidor and Pachocki resigned quickly after Altman was fired. The two returned to the company after Altman’s reinstatement.

In addition to Pachocki and Sidor, OpenAI President and co-founder Greg Brockman had been working to integrate the technique into new products. Last week, OpenAI’s board removed Brockman as a director, though it allowed him to remain as an employee. He resigned shortly thereafter, but returned when Altman was reinstated.

Sutskever’s breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models, a major obstacle in developing next-generation models, according to the person with knowledge. The research involved using computer-generated data, rather than real-world data like text or images pulled from the internet, to train new models.
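The article doesn't say how that data generation actually works, so purely as a hedged illustration of the general "synthetic data" idea (not OpenAI's pipeline), here is a minimal sketch: an existing model produces candidate training examples, an automatic check filters out the bad ones, and the survivors become training data. generate_candidate and passes_check below are hypothetical stand-ins for a model call and a verifier.

    import random

    def generate_candidate(rng):
        # Stand-in for sampling a (problem, solution) pair from an existing model;
        # occasionally emits a wrong answer to mimic imperfect model output.
        a, b = rng.randint(1, 99), rng.randint(1, 99)
        answer = a + b if rng.random() > 0.2 else a + b + 1
        return {"problem": f"{a} + {b}", "solution": answer}

    def passes_check(example):
        # Stand-in for an automatic verifier that filters out bad synthetic examples.
        a, b = (int(x) for x in example["problem"].split(" + "))
        return example["solution"] == a + b

    rng = random.Random(0)
    kept = [ex for ex in (generate_candidate(rng) for _ in range(1000)) if passes_check(ex)]
    print(f"kept {len(kept)} of 1000 generated examples as training data")

The point of the sketch is only that a verifier lets you keep the trustworthy slice of model-generated data instead of scraping more of the internet; whatever OpenAI actually did is not public.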

For years, Sutskever had been working on ways to allow language models like GPT-4 to solve tasks that involved reasoning, like math or science problems. In 2021, he launched a project called GPT-Zero, a nod to DeepMind’s AlphaZero program that could play chess, Go and Shogi. The team hypothesized that giving language models more time and computing power to generate responses to questions could allow them to develop new academic breakthroughs.
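GPT-Zero itself hasn't been described publicly, but as a rough, hypothetical sketch of "more time and computing power at answer time": one simple version is to sample many answers to the same question and take a majority vote, so accuracy scales with the compute you're willing to spend. sample_answer below is a stand-in for an LLM call, not any real API.

    import random
    from collections import Counter

    def sample_answer(question, rng):
        # Stand-in for one sampled model answer; a real system would call an LLM here.
        # Pretend the model answers "144" correctly about 60% of the time per sample.
        return "144" if rng.random() < 0.6 else str(rng.randint(100, 200))

    def answer_with_more_compute(question, n_samples=32, seed=0):
        # Spend more compute per question: sample many answers, return the most common one.
        rng = random.Random(seed)
        votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
        return votes.most_common(1)[0][0]

    print(answer_with_more_compute("What is 12 * 12?"))  # usually prints 144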

An OpenAI spokesperson declined to comment.


it was a really long time ago


Now I believe him :)


One of the first members to quit was on a team that sounds a lot like a separate team doing the same thing as Ilya's Superalignment team.

"Madry joined OpenAI in May 2023 as its head of preparedness, leading a team focused on evaluating risks from powerful AI systems, including cybersecurity and biological threats."


how many coups will it take to end this


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


this is by far the craziest timeline


It will make a great movie though.


I hope they get David Fincher to direct, based on a screenplay by Aaron Sorkin, and have Jesse Eisenberg playing sama. Andrew Garfield can play Brad, and Justin Timberlake can play the part of Ilya Sutskever. And of course the score has to be by Atticus Ross and Trent Reznor.


you're right. agi has been here since GPT-3 at the least.

it's honestly sad when people who have clearly not used gpt4 call it a parroting machine. that is incredibly ignorant.


Let me know when GPT can even play chess without making invalid moves; then we can talk about how capable it is of logical thinking.


Let me know when you can prove that "logical" and "intelligent" were ever stored on the same shelf, much less being meaningfully equivalent. If anything, we know that making a general intelligence (the only natural example of it we know) emulate logic is crazily inefficient and susceptible to biases that are entirely non-existent (save for bugs) in much simpler (and energy-efficient) implementations of said logic.


An AGI that can't even play a game of chess, a game that children learn to play, without making an invalid move doesn't really sound like an AGI.


holy fk

