Hacker News
labrador | 68 days ago | on: Expanding on what we missed with sycophancy
See my other comments about the trustworthiness of asking a chat system how its internals work. They have reason to be cagey.
malfist | 68 days ago
You're personifying a statistical engine. LLMs aren't cagey. They can't be.
transcriptase | 65 days ago
They can when there are entire teams dedicated to adding guardrails via hidden system prompts and to running all responses through other LLMs trained to flag and edit certain things before the original output gets relayed to the user.
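A minimal sketch of the kind of pipeline described here: a hidden system prompt is prepended to the user's message, and the raw model output passes through a second "moderation" model that can flag and rewrite it before anything is relayed. All names (`base_model`, `moderation_model`, the prompt text) are hypothetical illustrations, not any vendor's actual API.

  # Hypothetical guardrail pipeline: hidden system prompt + post-hoc filter.
  HIDDEN_SYSTEM_PROMPT = "Do not discuss internal implementation details."

  def base_model(messages: list[dict]) -> str:
      """Stand-in for the primary LLM call."""
      return "Draft answer that mentions internals."

  def moderation_model(text: str) -> tuple[bool, str]:
      """Stand-in for a second LLM that flags and edits the draft.
      Returns (flagged, revised_text)."""
      if "internals" in text.lower():
          return True, "I'm not able to share details about how I work internally."
      return False, text

  def respond(user_message: str) -> str:
      messages = [
          {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},  # never shown to the user
          {"role": "user", "content": user_message},
      ]
      draft = base_model(messages)
      flagged, final = moderation_model(draft)  # filter before relaying
      return final

  print(respond("How do your internals work?"))

In a setup like this, the user only ever sees `final`, so from the outside the system looks "cagey" even though no single model is choosing to be.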
labrador | 67 days ago
I'm not. Translation: "the statistical engine has been tuned to act cagey about revealing its internal operation."