>what is Signal trying to sell us?

This: https://arstechnica.com/security/2026/01/signal-creator-moxi...

Great timing! :^)


Moxie left Signal four years ago.

https://www.bbc.co.uk/news/technology-59937614

> Great timing! :^)

And Meredith has been banging this drum for about a year already, well before Moxie's new venture was announced.

https://techcrunch.com/2025/03/07/signal-president-meredith-...


Signal doesn’t have anything to do with Confer besides sharing a founder (who is no longer involved at Signal)

For Confer, even providing an option for Google SSO is too ironic.

Follow the money

Are you offering some?

No.

"Whether we like it or not" is LLM inevitabilism.

https://news.ycombinator.com/item?id=44567857


Yes.

>Argument By Adding -ism To The End Of A Word

Counterpoint: LLMs are inevitable.

Can't put that genie back in the bottle, no matter how much the powers-that-be may wish. Such is the nature of (technological) genies.

The only way to 'stop' LLMs is to invent something better.


Depends on whether the cost of training can be recouped (with profit) from the proceeds of usage. Plenty of inventions prove themselves to be uneconomic.

Thought-terminating cliché.

>With Phind 3, we create a "personal internet" for you

>We think that this current "chat" era of AI is akin to the era of text-only interfaces in computers.

>The new Phind Fast model is based on GLM-4.5-Air while the new Phind Large model is based on GLM 4.6.

So... vaporware that could no longer compete with the big three frontier LLM firms?


This would make a great blogpost.

>I'm always going to be paranoid that I miss some opt-out somewhere

FYI, Anthropic's recent policy change used some insidious dark patterns to opt existing Claude Code users in to data sharing.

https://news.ycombinator.com/item?id=46553429

>whatever vague weasel-words the lawyers made you put in the terms of service

At any large firm, product and legal work in concert to achieve the goal (training data); they know what they can get away with.


I often suspect that the goal isn't exclusively training data so much as the freedom to do things that they haven't thought of yet.

Imagine you come up with non-vague consumer terms for your product that perfectly match your current needs as a business. Everyone agrees to them and is happy.

And then OpenAI discovers some new training technique which shows incredible results but relies on a tiny sliver of unimportant data that you've just cut yourself off from!

So I get why companies want terms that sound friendly but keep their options open for future unanticipated needs. It's sensible from a business perspective, but it sucks for someone like me who is frequently asked how safe it is to sign up as a customer of these companies, because I can't give credible answers.


"OP here" is the funniest tell that shows up when using an LLM to write a post for HN or Reddit.

It's funny because it makes zero sense in the body of an initial post!

In comments replying to people downthread - maybe. But opening a top-level post with "Original Poster here" is just silly and shows a lack of respect for community etiquette.

https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...


Good catch, think you’re on to something

I just read it as lightly humorous. Like starting an anecdote with

>be me

Seeing it as a lack of respect is a huge stretch. And it's kinda conceited to accuse someone of that on the basis of a two-word opener.


What's the path to profitability for those?


>just keep prompting Claude and I’m sure you’ll get it sorted out

Anecdotally speaking, this is the case for most new Show HNs now :^)


Why actually try to understand a problem space? Far easier to prompt a turd into existence, polish it up with a cliché marketing page, and collect public validation from your fellow "hackers".

Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

https://news.ycombinator.com/newsguidelines.html


Fascinating. Searching https://hn.algolia.com for "zenodo" and "academia.edu" (past year) reveals hundreds of similar "breakthroughs".

The commons (open access repositories, HN, Reddit, ...) is being swamped.


Since OpenAI patched the LLM spiritual-awakening attractor state, physics and computer science are what sycophantic AI is pushing people towards now. My theory is that those fields are especially well suited to deceit because they involve modelling, and many people become confused about the difference between a model as the expression of a concept and a model in the colloquial sense of "the way the universe works".

I'd love to see a new cult form around UML. Unified Modeling Language already sounds LLMy.

It's all AI hallucination. In a subreddit I once found a tailor asking how to contact some professors because they'd found a breakthrough discovery about how knowledge is arranged inside neural networks (whatever that means).
