MrDunham's comments | Hacker News

I mentioned in another comment how hard it is for our brains to really comprehend the orders of magnitude difference between all animal cases (~680) and the former number of human cases (3.5M).

It would take ~5000 years at the current annual rate of animal cases to match the number of human cases just 40 years ago.

That's "Great Pyramid of Giza" ago (~4,500 years)... PLUS the ~500 years since Michelangelo, Leonardo da Vinci, and Raphael roamed the earth.
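For anyone who wants to sanity-check the arithmetic, here's the back-of-the-envelope version (figures from above):

    # ~3.5M human cases/year ~40 years ago vs. ~680 animal cases/year now
    human_cases_per_year = 3_500_000
    animal_cases_per_year = 680

    years_to_match = human_cases_per_year / animal_cases_per_year
    print(f"{years_to_match:,.0f} years")  # ~5,147 years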

The cool thing is that at a few hundred, one could theoretically* round up all (known) animal cases left. That's truly incredible work getting to this point if you think about it.

* Yes, geopolitical issues, geography, and plenty of other reasons might make this somewhat impossible... but the fact that we can actively picture a few hundred animals in our brains means that it's a very attainable goal.


Those are bonkers (low) numbers compared to the 3.5M (human?) cases, if I'm to believe the GP's comment.

It's also crazy how much Mother Teresa's quote rings true, even in reverse ("If I look at the mass, I will never act. If I look at the one, I will.") When I initially read 3.5M cases, I thought "wow, that's a lot", and somehow the 445 animal cases in Cameroon felt (at first) more real and similarly like "a lot".

No comment other than it's interesting how our human brains work and distort how numbers "feel".

Once my rational brain kicked in, I realized that's over 5,000 years for the current number of animal cases to match the former number of human cases. The future is awesome.


If you halve the cases every year you'll eradicate it in a generation.

My technical cofounder reminds me of this story on a weekly basis.
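For the curious, the halving math checks out (a rough sketch, assuming ~680 cases today and clean yearly halving):

    import math

    # How many yearly halvings until fewer than one case remains?
    cases = 680
    years = math.ceil(math.log2(cases))
    print(years)  # 10 -- comfortably within a generation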


I love Anthropic's models, but their realtime voice is absolutely terrible. Every time I use it, there's at least one moment where I curse at it for interrupting me.

My main use case for OpenAI/ChatGPT at this point is realtime voice chats.

OpenAI has done a pretty great job w/ realtime (their realtime API is pretty fantastic out of the box... not perfect, but pretty fantastic and dead simple setup). I can have what feels like a legitimate conversation with AI and it's downright magical feeling.
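For anyone curious what "dead simple" looks like: the session is a single WebSocket you stream JSON events over. A minimal sketch from memory (the URL, model name, and event types are assumptions; verify against OpenAI's current docs before relying on this):

    import asyncio
    import json
    import os

    import websockets  # pip install websockets

    async def main():
        url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
        headers = {
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "OpenAI-Beta": "realtime=v1",
        }
        # Note: older `websockets` versions name this kwarg `extra_headers`
        async with websockets.connect(url, additional_headers=headers) as ws:
            # Ask for a response; audio in/out is more of the same --
            # events streamed back and forth over this one socket.
            await ws.send(json.dumps({"type": "response.create"}))
            async for raw in ws:
                event = json.loads(raw)
                print(event.get("type"))
                if event.get("type") == "response.done":
                    break

    asyncio.run(main())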

That said, the output is created by OpenAI models so it's... not my favorite.

I sometimes use ChatGPT realtime to think through/work through a problem/idea, have it create a detailed summary, then upload that summary to Claude to let 4.5 Opus rewrite/audit and come up with a better final output.


I use Claude Code for everything, and I love Anthropic's models. I don't know why, but it wasn't until reading this that I realized: I can use Sparrow-1 with Anthropic's models within CVI. Adding this to my todo list.


The website link on GitHub points to https://deepmyst.com/

But the site is actually hosted at https://www.deepmyst.com/, with no forwarding from the apex domain to www, so it looks like the website is down.
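For reference, the fix is a 301 from the apex host to www. A minimal sketch of the behavior in stdlib Python (in practice you'd configure this at the DNS/CDN or web-server layer, not in app code):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ApexRedirect(BaseHTTPRequestHandler):
        """Answer every request on the apex domain with a 301 to www."""

        def do_GET(self):
            self.send_response(301)
            self.send_header("Location", f"https://www.deepmyst.com{self.path}")
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), ApexRedirect).serve_forever()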

Otherwise, excited to deep dive into this, as it's a variant of how we do development and seems to work great when the AIs fight each other.


It's a good thing (/s) that Chrome now hides the www in the address bar, so it looks like the same domain is down in one tab and working in the other.


An agent didn't do a very good job.


> "LLMs are terrible at asking questions. They just make a bunch of assumptions and brute-force something based on those guesses."

Strongly disagree that they're terrible at asking questions.

They're terrible at asking questions unless you ask them to... at which point they ask good, sometimes fantastic questions.

All my major prompts now have some sort of "IMPORTANT: before you begin you must ask X clarifying questions. Ask them one at a time, then reevaluate the next question based on the response"

X is typically 2–5, which I find DRASTICALLY improves output.
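Concretely, the pattern is just a suffix on the prompt. Something like this sketch (the wording is illustrative, not my exact prompt):

    N_QUESTIONS = 3  # typically 2-5

    CLARIFY_SUFFIX = f"""
    IMPORTANT: before you begin, you must ask {N_QUESTIONS} clarifying
    questions. Ask them ONE AT A TIME, and reevaluate the next question
    based on my response. Only start the task once all {N_QUESTIONS}
    questions have been answered.
    """

    system_prompt = "You are a senior engineer writing a PRD." + CLARIFY_SUFFIX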


There was a great comedic bit on this:

"Thank you for flying Delta? I'd fly a kite if it was $11 cheaper"

I couldn't find the comedian, but the truth in it hits.

Side note: if I recall correctly, Delta listened to their customers a decade+ back, gave more legroom, then nearly went bankrupt because no one wanted to pay more for the service.


This is the correct answer. I like to go one step further than the root comment:

Nearly all of my "agents" are required to ask at least three clarifying questions before they're allowed to do anything (write code, write a PRD, write an email newsletter, etc.)

Force it to ask one question at a time and it's even better, though that gain isn't as step-function as the jump vs. just letting it run off your initial ask.

I think the reason is exactly what you state, @7thpower: it takes a lot of thinking to really provide enough context and direction to an LLM, especially (in my opinion) because they're so cheap and carry no social capital cost (vs. asking a colleague or employee, where having them work for a week just to throw away all their work is a very non-zero cost).


My routine is:

Prompt 1: <define task> Do not write any code yet. Ask any questions you need for clarification now.

Prompt 2: <answer questions> Do not write any code yet. What additional questions do you have?

Iterate until the questions become unimportant.
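As a loop, that routine looks roughly like this (`ask_llm` and the NO QUESTIONS stop phrase are hypothetical stand-ins for whatever client and convention you use):

    def clarify_then_code(task: str, ask_llm) -> list[dict]:
        """Run the ask-questions-first routine; returns the message history."""
        messages = [{
            "role": "user",
            "content": f"{task}\n\nDo not write any code yet. "
                       "Ask any questions you need for clarification now. "
                       "If nothing important is left, say NO QUESTIONS.",
        }]
        while True:
            reply = ask_llm(messages)  # hypothetical: takes history, returns text
            messages.append({"role": "assistant", "content": reply})
            if "NO QUESTIONS" in reply:
                break  # questions have become unimportant
            answers = input(f"Model asks:\n{reply}\n\nYour answers: ")
            messages.append({
                "role": "user",
                "content": f"{answers}\n\nDo not write any code yet. "
                           "What additional questions do you have?",
            })
        messages.append({"role": "user", "content": "Now write the code."})
        return messages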


Can't speak to whether it's common, but it's how I was taught and how I drove in California... so there were rarely any winter conditions to speak of.

I preferred downshifting vs. braking, personally.


I was lucky enough to help Singularity University launch their startup accelerator back in 2015 and have Be My Eyes as a portfolio company.

I say lucky because they were such good people with such a great mission. Hans (founder) and Christian (co-founder) were really, really fun to get to spend ~10 weeks with.

The new CEO is fairly recent so I can't vouch either way for him. Hans and Christian were an absolute joy with incredible hearts.

