OpenAI says GPT‑5.3 Instant, the latest addition to its GPT-5.3 family of models, is less inclined to moralize.
"We heard feedback that GPT‑5.2 Instant would sometimes refuse questions it should be able to answer safely, or respond in ways that feel overly cautious or preachy, particularly around sensitive topics," OpenAI said in a blog post on Tuesday.
OpenAI recently celebrated a deal to provide the Defense Department with AI services after rival Anthropic rejected the idea of removing model guardrails to accommodate autonomous weapon development and surveillance of citizens.
Anthropic earned the wrath of Don for doing so – but appears to have won the love of the people.
A company spokesperson on Tuesday noted that Claude had become the most downloaded app on Google Play in the US: "Claude's numbers continue to accelerate; Monday (March 2) was the largest single day ever for signups." Claude also became the top free app in the iOS App Store in the US.
OpenAI CEO Sam Altman, perhaps noting the public's preference for principles, said his company plans to amend its initial contract with the DoD "to prohibit deliberate tracking, surveillance, or monitoring of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."
He also issued an almost-apology for courting the DoD so soon after Anthropic stormed out. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy," he said.
"GPT‑5.3 Instant significantly reduces unnecessary refusals, while toning down overly defensive or moralizing preambles before answering the question," OpenAI's blog post continued. "When a useful answer is appropriate, the model should now provide one directly, staying focused on your question without unnecessary caveats."
It's difficult to find the right balance between sycophancy and having to baffle a determined AI bomb with phenomenology so it won't detonate. But OpenAI contends GPT-5.3 Instant will be a better conversational partner as a result of its latest adjustments.
OpenAI claims the updated model also offers more facts and fewer hallucinations – aka "mistakes" for those who object to the anthropomorphization of vector math.
The company conducted two evaluations of the new model, one on domains where decisions have consequences (e.g. law, medicine, and finance) and the other on inconsequential, de-identified ChatGPT banter where users flagged misstatements.
"On the higher-stakes evaluation, GPT‑5.3 Instant reduces hallucination rates by 26.8 percent when using the web and 19.7 percent when relying only on its internal knowledge, compared to prior models," the company said. "On the user-feedback evaluation, hallucinations decrease by 22.5 percent with web use and 9.6 percent without web access."
The model also supposedly does better at contextualizing the information it finds when users ask it to search the web. And it's said to be better at writing.
While GPT-5.3 Instant may be a slightly better conversation partner, it lost a bit of ground on OpenAI's own benchmark measurements [PDF].
"On average, the model performs above GPT-5.1-instant and below GPT-5.2-instant on our disallowed content evaluations," the company said in the model system card evaluation. "GPT-5.3-instant shows regressions relative to GPT-5.2-instant and GPT-5.1-instant for disallowed sexual content, and relative to GPT-5.2-instant for self-harm on both standard and dynamic evaluations."
The regressions for graphic violence and violent illicit behavior are too small to be statistically significant, according to OpenAI. In other categories, GPT-5.3 Instant matches or exceeds prior measurements.
ChatGPT users and developers can start using GPT-5.3 Instant today. GPT-5.2 Instant will remain available to paid users until June 3, 2026. ®
Source: The Register