
Created a summary of the comments from this thread about 15 hours after it was posted, when it had 1,983 comments, using gpt-5-high and gemini-2.5-pro with a prompt similar to simonw's [1]. Used a Python script [2] that I wrote to generate the summaries; a rough sketch of the approach is below.

- gpt-5-high summary: https://gist.github.com/primaprashant/1775eb97537362b049d643...

- gemini-2.5-pro summary: https://gist.github.com/primaprashant/4d22df9735a1541263c671...

[1]: https://news.ycombinator.com/item?id=43477622

[2]: https://gist.github.com/primaprashant/f181ed685ae563fd06c49d...
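For anyone curious, here is a minimal sketch of that kind of pipeline (not the actual script from [2]; the Algolia HN API endpoint, the OpenAI Python SDK call, the placeholder thread id, and the summary prompt are all assumptions):

  import requests
  from openai import OpenAI  # assumes the official OpenAI Python SDK

  THREAD_ID = 12345678  # hypothetical HN item id of the thread to summarize
  SUMMARY_PROMPT = "Summarize the main themes and notable disagreements in this HN discussion."

  def fetch_comments(item_id):
      # Algolia's HN API returns the full nested comment tree for an item.
      item = requests.get(f"https://hn.algolia.com/api/v1/items/{item_id}").json()
      comments = []

      def walk(node, depth=0):
          for child in node.get("children", []):
              if child.get("text"):  # comment text is HTML; a real script would strip tags
                  comments.append("  " * depth + child["text"])
              walk(child, depth + 1)

      walk(item)
      return comments

  prompt = SUMMARY_PROMPT + "\n\n" + "\n\n".join(fetch_comments(THREAD_ID))

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  response = client.chat.completions.create(
      model="gpt-5",            # model name assumed; "gpt-5-high" = high reasoning effort
      reasoning_effort="high",
      messages=[{"role": "user", "content": prompt}],
  )
  print(response.choices[0].message.content)

Producing the gemini-2.5-pro summary would follow the same pattern with the Gemini API swapped in for the OpenAI call.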





Wow, the 2.5 Pro summary is far better; it reads like coherent English instead of a list of bullet points.

Someone should start a Gemini-powered blog that distills the top HN posts into concise summaries.

Yes, agreed. Context length might be a factor, as the total number of prompt tokens is >120k, and LLM performance generally degrades at longer context lengths.
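(A rough way to check that number, assuming tiktoken with the o200k_base encoding as an approximation of the gpt-5 tokenizer; the prompt file name is hypothetical:)

  import tiktoken

  enc = tiktoken.get_encoding("o200k_base")  # tokenizer for recent OpenAI models, used here as an approximation

  with open("hn_prompt.txt") as f:  # hypothetical file holding the assembled prompt
      print(len(enc.encode(f.read())), "tokens")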

Why not use the ChatGPT interface instead of the API to save credits? Pass the cookies.

I only have access to GPT-5 through the API for now. The number of tokens used (>130k) is higher than ChatGPT's context limit (128k), so it wouldn't work well anyway.


