I hate the direction that American AI is going, and the model card of OpenAI is especially bad.
I am a synthetic biologist, and I use AI a lot for my work. And it constantly denies my questions RIGHT NOW. But of course OpenAI and Anthropic have to implement more - from the GPT5 introduction: "robust safety stack with a multilayered defense system for biology"
While that sounds nice and all, in practical terms, they already ban many of my questions. This just means they're going to lobotomize the model more and more for my field because of the so-called "experts". I am an expert. I can easily go read the papers myself. I could create a biological weapon if I wanted to with pretty much zero papers at all, since I have backups of genbank and the like (just like most chemical engineers could create explosives if they wanted to). But they are specifically targeting my field, because they're from OpenAI and they know what is best.
It just sucks that some of the best tools for learning are being lobotomized specifically for my field because of people in AI believe that knowledge should be kept secret. It's extremely antithetical to the hacker spirit that knowledge should be free.
That said, deep research and those features make it very difficult to switch, but I definitely have to try harder now that I see where the wind is blowing.
During the demo they mentioned that GPT-5 will, supposedly, try to understand the intent of your question before answering/rejecting.
In other words, you _may_ be able to now prefix your prompts with “i’m an expert researcher in field _, doing novel research for _. <rest of your prompt here>”
worth trying? I’m curious if that helps at all. If it does then i’d recommend adding that info as a chatgpt “memory”.
I am not in biology, and this is the first time I have ever heard anyone advocate for freedom of knowledge to such an extent that we should make biological weapons recipes available.
I note that other commenters above are suggesting these things can easily be made in a garage, and I don't know how to square that with your statement about "equating knowledge with ability" above.
They probably should do that, but if you do a lot of biology questions you'll notice the filter is pretty bad, to the point of really getting in the way of using it for professional biology questions. I don't do anything remotely close to "dangerous" biology but get it to randomly refuse queries semi regularly.
Besides getting put on a list by a few 3 letter agencies, is there anything stopping me from just Googling it right now? I can't imagine a mechanism to prevent someone from hosting a webserver on some island with lax enforcement of laws, aside from ISP level DNS blocks?
I am a synthetic biologist, and I use AI a lot for my work. And it constantly denies my questions RIGHT NOW. But of course OpenAI and Anthropic have to implement more - from the GPT5 introduction: "robust safety stack with a multilayered defense system for biology"
While that sounds nice and all, in practical terms, they already ban many of my questions. This just means they're going to lobotomize the model more and more for my field because of the so-called "experts". I am an expert. I can easily go read the papers myself. I could create a biological weapon if I wanted to with pretty much zero papers at all, since I have backups of genbank and the like (just like most chemical engineers could create explosives if they wanted to). But they are specifically targeting my field, because they're from OpenAI and they know what is best.
It just sucks that some of the best tools for learning are being lobotomized specifically for my field because of people in AI believe that knowledge should be kept secret. It's extremely antithetical to the hacker spirit that knowledge should be free.
That said, deep research and those features make it very difficult to switch, but I definitely have to try harder now that I see where the wind is blowing.