
1. It would be nice to see examples where GPT-4o was inaccurate but the best-performing models were accurate.

2. It would be nice to try again with temperature 0. I do a lot of structured data extraction, and in my experience temperature 0 should always be used for it; it can make a huge difference. A temperature of 1 essentially means the model will sometimes pick lower-probability tokens, which for extraction usually means less accurate ones...
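A minimal sketch of what temperature-0 extraction looks like with the OpenAI Python SDK (the model name, fields, and input text here are placeholders, not anything from the article):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    document_text = "Invoice #1234, issued 2024-05-01, total $418.00"  # placeholder input

    # temperature=0 makes decoding (near-)greedy: the model sticks to its
    # highest-probability tokens, which is what you want for extraction.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Extract invoice_number, date, and total as JSON."},
            {"role": "user", "content": document_text},
        ],
    )
    print(resp.choices[0].message.content)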



Agree, temperature 0 would be interesting to compare for this use case, where there's a clear right and wrong answer based on historical data. We experimented with temperature for our AI SQL editor and found 0.3 to be ideal so it can still self-heal when errors appear (which still happens near 0, even though you're optimizing for correctness).
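Roughly what that self-heal loop looks like, reading the parent comment as: a little temperature lets a retry produce a different query instead of regenerating the same broken one. Everything here (prompts, the execute() stub, the retry count) is a made-up sketch, not their editor's actual implementation:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_sql(question: str, previous_error: str | None = None) -> str:
        # Temperature 0.3: enough randomness that a retry doesn't reproduce the
        # exact same broken query, which is the failure mode near temperature 0.
        messages = [
            {"role": "system", "content": "Write a single SQL query answering the question."},
            {"role": "user", "content": question},
        ]
        if previous_error:
            messages.append(
                {"role": "user",
                 "content": f"The previous query failed with this error, fix it: {previous_error}"}
            )
        resp = client.chat.completions.create(
            model="gpt-4o",       # placeholder model name
            temperature=0.3,
            messages=messages,
        )
        return resp.choices[0].message.content

    def execute(sql: str) -> None:
        # Stand-in for running the query against a real database.
        raise NotImplementedError

    question = "monthly revenue by region"
    sql = generate_sql(question)
    for _ in range(3):  # self-heal: feed the error back and retry
        try:
            execute(sql)
            break
        except Exception as exc:
            sql = generate_sql(question, previous_error=str(exc))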



