model roundup

ChatGPT 5.4

3 items · started 2026-04-16 · ongoing (last activity 2026-04-19)

  1. Hi everyone, I’m trying to determine whether other users are seeing a similar behavior change with GPT-5.4 Pro Standard on long-context, high-effort tasks. I’m not claiming a confirmed backend bug.

  2. I gave ChatGPT 5.4 this prompt: invent a new language from scratch, a language that only LLMs can understand, that all LLMs for a given size will understand (because of internal coherence for the language) given an input text in that langu…

  3. So, I get that Adaptive thinking decides how many tokens it would like to use. I usually dislike this setting because you have to trust that it knows how many tokens it needs before it tries to solve the problem.
