In a seeming rendition of the classic pre-execution "you ask too much" trope, OpenAI has revealed itself as being (shocker) not so open after all. The AI chatbot company appears to have started sending threatening emails to users who ask its latest models, codenamed "Strawberry", questions that are a little too probing.
"i get the scary letter if i mention the words 'reasoning trace' in a prompt at all, lol" (post on X, September 13, 2024)
Some have reported (via Ars Technica) that using certain phrases or questions when speaking to o1-preview or o1-mini results in an email warning that states: "Please halt this activity and ensure you are using ChatGPT in accordance with our Terms of Use and our Usage Policies. Additional violations of this policy may result in loss of access to GPT-4o with Reasoning."
X user thebes, for instance, claims they receive this warning if they use the words "reasoning trace" in a prompt. Riley Goodside, prompt engineer at Scale AI, received an in-chat policy violation warning for telling the model not to tell them anything about its "reasoning trace", which is fairly strong evidence that certain suspect probing phrases are flagged regardless of context.
So, it seems OpenAI isn't looking to be open regarding its latest models' "reasoning". These models, if you weren't aware, attempt to reason through problems step by step before answering. Users can see a filtered summary of this reasoning, but OpenAI keeps the intricacies of it hidden.
OpenAI says the decision to hide such "chains of thought" was made "after weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring."
All of this is a reminder that while yes, technically OpenAI's parent company is a nonprofit, the reality is much murkier than that. The company in fact has a hybrid kind-of-nonprofit, kind-of-commercial structure.