
> You can now get high quality answers for technical questions in 10 seconds instead of 50.

ChatGPT 4 does not take 50 seconds to answer, so I don't understand this comparison.



Recently I've used GPT-4, and yes, it does take up to a minute even for easy questions.

I've asked it how to scp a file on Windows 11 and it'll take a minute to tell me all the options possible.

If this takes 1/5th the time for equivalent questions, I'd consider switching.
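For reference, a minimal sketch of the direct answer being asked for; the host, username, and paths are placeholders, and this assumes the OpenSSH client that ships with Windows 10/11:

```shell
# Copying a file with the bundled OpenSSH scp client
# (run from PowerShell or cmd; "user@host" and paths are placeholders).

# Local -> remote:
scp C:\Users\me\report.pdf user@host:/home/me/

# Remote -> local:
scp user@host:/home/me/report.pdf C:\Users\me\Downloads
```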


Not my experience at all. Are you counting the entire answer in your time?

If so, consider adding one of the “just get to the point” prompts. GPT-4's defaults are geared towards public acceptance through long-windedness, which is IMO entirely unnecessary when using it to do functional things like scp a file.


LOL, it’s not just for “public acceptance”. Look up Chain of Thought. Asking it to get to the point typically reduces the accuracy.


> LOL, it’s not just for “public acceptance”. Look up Chain of Thought. Asking it to get to the point typically reduces the accuracy.

Just trying to offer some helpful feedback: this would have been a great comment, except for the "LOL" at the beginning, which was unnecessary and demeaning.


You're being snarky, but you're right. I have scripts set up to auto-summarise expansive answers. I wish I could build this into the ChatGPT UI, though.
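A hypothetical sketch of what such a wrapper script could look like, using OpenAI's public chat completions HTTP endpoint. The model name, the two-sentence limit, and the `build_payload`/`summarise` helper names are assumptions, and `OPENAI_API_KEY` must be set in the environment:

```shell
# Sketch: pipe a long answer back through the API and ask for a summary.

build_payload() {
  # $1 = text to summarise; escape backslashes, quotes, and newlines for JSON
  esc=$(printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' \
        | awk '{printf "%s\\n", $0}')
  printf '{"model":"gpt-4","messages":[{"role":"user","content":"Summarise in at most two sentences:\\n\\n%s"}]}' "$esc"
}

summarise() {
  # reads the long answer from stdin, prints the raw API response
  curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(build_payload "$(cat)")"
}

# Usage:
# summarise < long_answer.txt
```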


I know this is silly, but I've had great success asking ChatGPT to summarise ChatGPT's answers.


Try the custom instructions feature


The words "briefly" or "without explanation" work well.

By keeping the prompt short, it starts generating output quicker too.


Yeah, I would say this is a prompting problem, not a model problem. In a product area we're building out right now with GPT-4, our prompt (more or less) tells it to provide exactly 3 values, and it does that and only that. It's quite fast.

It's also a use-case thing. For certain coding use cases, Phind will very likely always be faster, because it isn't designed to be general-purpose.


This isn't quite a fair comparison, since I have custom instructions that ask for brief but complete answers, but I tried "how to scp a file on Windows 11":

ChatGPT4: 14 seconds

phind with "pair programmer" checked: 65 seconds

phind default: 16 seconds


Take a look at the AutoExpert custom instructions: https://github.com/spdustin/ChatGPT-AutoExpert

It lets you specify verbosity from 1 to 5 (e.g. "V=1" in the prompt). Sometimes the model will just ignore that, but it actually does work most of the time. I use a verbosity of 1 or 2 when I just want a quick answer.


> I've asked it how to scp a file on Windows 11 and it'll take a minute

https://imgur.com/a/iqxOJUV was 6.5 seconds.

https://imgur.com/a/pQFfWli was 15 seconds.

You can tell they're GPT-4 because the logo is purple (the logo is green when using 3.5).


More often than not, ChatGPT4 is noticeably slow enough that I question why I pay for it.


Sometimes it's insanely quick, like GPT-3.5 Turbo or a cached answer or something.


We find that it takes around a minute for a 1024-token answer. Answers to less complex questions will take less time, but Phind will still be 5x faster.


That really depends on the complexity of your request and any prompt-engineering techniques in use for that request. "Think step by step" in particular can improve answer quality in certain contexts, at the expense of generation time (because more tokens are emitted).



