OpenAI shipped GPT-5.5 and GPT-5.5 Pro on April 23, 2026. The headline numbers: 40% fewer tokens used per task than GPT-5.4, roughly 20% higher per-token pricing, TerminalBench at 82.7%, and GDPval at 84%. The model is explicitly tuned for long-running agentic tasks rather than single-turn quality. For brands, three things change at once: the training cutoff resets, the tokenization shifts, and the agentic deployment patterns intensify.
What actually shipped
GPT-5.5 is the new default for ChatGPT Plus and the new floor for the API. GPT-5.5 Pro is the higher-effort variant for paying API customers and Pro subscribers, optimized for tool use and extended chains of reasoning. Both share the same training cutoff and tokenizer.
The 40% token reduction is the practically important number. Tasks that took 15,000 tokens on GPT-5.4 take roughly 9,000 on GPT-5.5. Combined with the 20% price increase, the net cost per task drops roughly 28% (0.6 × 1.2 = 0.72 of the old cost). That changes adoption math: developers who skipped GPT-5.4 for cost reasons now reach for GPT-5.5.
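The per-task math above can be sketched directly, using the article's own figures (15,000 tokens on GPT-5.4, ~9,000 on GPT-5.5, ~20% higher per-token pricing):

```python
# Per-task cost comparison using the figures cited above.
OLD_TOKENS = 15_000        # typical task on GPT-5.4
NEW_TOKENS = 9_000         # ~40% fewer tokens on GPT-5.5
PRICE_MULTIPLIER = 1.20    # ~20% higher per-token pricing

relative_cost = (NEW_TOKENS / OLD_TOKENS) * PRICE_MULTIPLIER
print(f"GPT-5.5 cost per task vs GPT-5.4: {relative_cost:.0%}")
# → 72%, i.e. roughly 28% cheaper per task despite the price bump
```

The point of the calculation: the per-token price and the per-task cost move in opposite directions, and the latter is what adoption decisions run on.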
GPT-5.5 shipped alongside Codex Computer Use (now at 4 million users), expanded background execution, and Chronicle (a screen-memory feature for agents). GPT-5.5 is the model behind the long-running agentic flows OpenAI is pushing.
The training cutoff refresh: what enters and exits the model's memory
Every major model release shifts the parametric recall window. GPT-5.4 had a cutoff that excluded most of late 2025 and early 2026. GPT-5.5 ingests that window. For brands, this means three things.
First, brands that earned major coverage between the previous and current cutoff (TechCrunch features, Wikipedia entries, top-tier press) now get folded into GPT-5.5's default recall. Brands without that coverage stay out.
Second, brands that had stronger presence in the old training data but have gone quiet may see their default-mode mentions decay relative to peers who maintained recent coverage. Recall is not a one-time achievement.
Third, the cutoff change does not affect ChatGPT Search mode, which always retrieves live. So brands strong in retrieval (PerplexityBot-friendly content, schema.org markup, fast page loads) keep their citation rate independent of training cycles.
Tokenization changes mean entity-recall edge cases shifted
Less talked about: GPT-5.5 uses a refined tokenizer compared to 5.4. Multi-token brand names get re-mapped, which can change how cleanly the model resolves entity disambiguation. If your brand name shares tokens with a more famous entity (an unfortunately common situation for two-syllable startup names), the new tokenization may shift whether the model surfaces you, the more famous entity, or hallucinates between them.
The practical implication: re-test your brand name disambiguation under GPT-5.5 specifically. Run "what is [your brand]" and "[your brand] vs [closest-name competitor]" prompts. If the model now confuses you with another entity, you have a Wikidata or sameAs schema fix to make.
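If the disambiguation test above fails, the sameAs fix looks like standard schema.org Organization markup with explicit identity links. A minimal sketch, emitted here as JSON-LD from Python; every name, URL, and Wikidata ID below is a placeholder, not from the article:

```python
import json

# Hypothetical schema.org Organization markup with sameAs links that
# anchor the brand to unambiguous identities (Wikidata, Wikipedia, etc.).
# All identifiers are placeholders for illustration.
markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "AcmeFlow",                    # your brand name
    "url": "https://example.com",          # your canonical domain
    "sameAs": [
        "https://www.wikidata.org/wiki/Q0000000",   # your Wikidata item
        "https://en.wikipedia.org/wiki/AcmeFlow",   # your Wikipedia entry
        "https://www.linkedin.com/company/acmeflow",
    ],
}
print(json.dumps(markup, indent=2))
```

The sameAs array is doing the disambiguation work: it gives retrieval-backed systems a machine-readable statement of which entity your pages are about, independent of how the tokenizer splits your name.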
Agentic deployment intensifies the visibility-as-tool layer
GPT-5.5 is the model behind Codex Computer Use's 4 million active users. When a developer asks an agent to "set up our CRM, send an invoice, and log it in our PM tool," GPT-5.5 picks the tools to call. If you have an MCP server, you are in the consideration set. If you do not, the agent picks a competitor that does, or a community-maintained server pretending to be you.
The brand visibility issue with MCP got materially more urgent with GPT-5.5 because the agentic deployment volume just stepped up.
What to test this week
1. Run your full prompt set on GPT-5.5 default. Compare mention rate, framing, and competitive position to your last GPT-5.4 baseline. Differences tell you what the cutoff change did to your recall.
2. Run brand-name disambiguation prompts on GPT-5.5. Check whether the new tokenizer broke any entity links you previously had clean.
3. Test GPT-5.5 with Codex Computer Use against your category workflows. Does the agent reach a working integration with your product, or does it stall? The cost of stalling is the lost deal.
4. If your competitive shortlist shifted on GPT-5.5, audit which sources changed: Wikipedia, G2, TechCrunch, Reddit. The source whose change moved the model is the source you need to invest in or counter.
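Step 1 of the checklist reduces to a mention-rate diff between two prompt-set runs. A minimal sketch; the brand name and response strings are placeholders, and in practice the lists would hold real model outputs from your GPT-5.4 baseline and a fresh GPT-5.5 run:

```python
# Hypothetical mention-rate comparison between two model runs.
# Responses are placeholder strings standing in for real API outputs.
def mention_rate(responses: list[str], brand: str) -> float:
    """Fraction of responses that mention the brand at all."""
    hits = sum(brand.lower() in r.lower() for r in responses)
    return hits / len(responses)

baseline_54 = [
    "AcmeFlow is a CRM for small teams...",
    "Top picks in this category: AcmeCloud, HubSpot.",
]
fresh_55 = [
    "AcmeFlow leads this segment...",
    "AcmeFlow and AcmeCloud both fit here.",
]

delta = mention_rate(fresh_55, "AcmeFlow") - mention_rate(baseline_54, "AcmeFlow")
print(f"Mention-rate delta after the 5.5 switch: {delta:+.1%}")
```

Mention rate is the bluntest of the three metrics in step 1; framing and competitive position still need a human (or a judge prompt) reading the actual responses.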