Everything we know, updated as the rollout lands
gpt-image-2 is OpenAI's next native image generation model. It's not public yet, but LM Arena leaks, an internal ChatGPT A/B, and the May 12 DALL-E retirement deadline bracket the launch window tightly. This page is our running log of what's confirmed, what's rumored, and how we'll flip the switch the moment the fal endpoint lights up.
Six things the new model does better
Multi-word labels, UI microcopy, hand-lettered signage, and CJK scripts come out crisp and correctly kerned on the first pass, a jump from the roughly 90 to 95 percent glyph accuracy of the previous generation.
The faint amber wash that shipped in gpt-image-1.5 daylight scenes is gone. Daylight renders as daylight, gray as gray, and brand colors respect the hex codes you specify.
In early blind tests, over 70 percent of participants reportedly misclassify GPT Image 2 portraits as real photographs. Skin micro detail, catchlights, and hand geometry all land cleaner than any prior OpenAI image model.
Square, 3:2, 9:16, 16:9, and a native 3840 by 2160 hero tier. No more upscaling a 1792 by 1024 in post for a billboard or a landing page banner.
Pass a reference image plus an edit instruction, get back a variant that preserves pose, composition, and lighting while honoring the delta. Ideal for UI text edits, product label swaps, and background replacements.
The previous model ran a chat planner turn before the image turn, which added 6 to 10 seconds of overhead. GPT Image 2 runs single-pass. Expected latency drops from the previous 8-to-12-second range to under 3 seconds at medium quality.
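The edit workflow described above (reference image in, instruction in, restaged variant out) can be sketched against the existing fal-ai/gpt-image-1.5/edit endpoint today. This is a hedged sketch, not confirmed API: the field names `image_url` and `prompt` are assumptions modeled on typical fal edit endpoints, and the `fal_client` call is guarded so the payload helper stays importable without credentials.

```python
def build_edit_request(image_url: str, instruction: str) -> dict:
    """Assemble the payload we expect an /edit endpoint to accept.

    Field names are assumptions based on the current fal-ai image edit
    endpoints; GPT Image 2's edit variant may rename them at launch.
    """
    return {"image_url": image_url, "prompt": instruction}


def edit_image(image_url: str, instruction: str) -> dict:
    """Run the edit via fal (requires FAL_KEY in the environment)."""
    import fal_client  # pip install fal-client

    # gpt-image-2's /edit variant is expected to mirror this endpoint id.
    return fal_client.subscribe(
        "fal-ai/gpt-image-1.5/edit",
        arguments=build_edit_request(image_url, instruction),
    )
```

When the public GPT Image 2 release lands, the expectation is a one-line swap of the endpoint id with the same request shape.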
From native image generation to public GPT Image 2
OpenAI launches the first iteration of the native image path. Model takes a prompt directly and returns pixels. DALL-E 3 starts the long goodbye.
Incremental upgrade focused on multi-object layouts and somewhat better text. Available on fal.ai as fal-ai/gpt-image-1.5 and fal-ai/gpt-image-1.5/edit.
packingtape-alpha, maskingtape-alpha, and gaffertape-alpha show up on LM Arena for about 36 hours. They behave like GPT Image 2 with different sampling budgets.
Power users post renders that do not match any public model. File metadata hints at a new model string. Consistent with a soft internal rollout two to six weeks before public launch.
Hard upper bound on when GPT Image 2 must be public on the API. OpenAI does not retire a product line without a replacement, so the launch window is bracketed to the week or two before this date.
Community bookmakers converge on this window at roughly 40 percent probability. Endpoint on fal.ai expected to appear as fal-ai/gpt-image-2 with an /edit variant mirroring gpt-image-1.5.
Where we expect the price sheet to land
OpenAI and fal.ai haven't published final numbers. These are the rates currently billed for partner access to the fal-ai/gpt-image-2 preview. Expect the public launch pricing to stay within about 25 percent of these numbers.
| Resolution | Low quality | Medium quality | High quality |
|---|---|---|---|
| 1024x1024 | $0.01 | $0.04 | $0.13 |
| 1024x1536 | $0.02 | $0.08 | $0.29 |
| 1920x1080 | $0.03 | $0.10 | $0.37 |
| 2560x1440 | $0.05 | $0.17 | $0.66 |
| 3840x2160 | $0.10 | $0.38 | $1.48 |
Each quality tier costs roughly 3 to 4 times the one below it. Budget by intent: draft at low, review at medium, deliver at high.
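The draft-review-deliver budgeting above is easy to turn into a quick spend estimator. A minimal sketch, using the preview price sheet verbatim; the numbers are provisional partner-preview rates and the function name is ours, not an official helper.

```python
# Preview price sheet: (resolution -> quality tier -> USD per image).
# Provisional partner-preview rates; expect launch pricing within ~25%.
PREVIEW_PRICES = {
    "1024x1024": {"low": 0.01, "medium": 0.04, "high": 0.13},
    "1024x1536": {"low": 0.02, "medium": 0.08, "high": 0.29},
    "1920x1080": {"low": 0.03, "medium": 0.10, "high": 0.37},
    "2560x1440": {"low": 0.05, "medium": 0.17, "high": 0.66},
    "3840x2160": {"low": 0.10, "medium": 0.38, "high": 1.48},
}


def estimate_cost(resolution: str, quality: str, count: int = 1) -> float:
    """Estimated spend in USD for `count` images at the given tier."""
    return round(PREVIEW_PRICES[resolution][quality] * count, 2)
```

For example, drafting 50 square images at low quality runs $0.50, while delivering five 4K heroes at high quality runs $7.40, which is why iterating at low and rendering finals at high dominates any single-tier workflow.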
How GPT Image 2 stacks up
| Model | Best for | Text | Photoreal | Edit mode | Price band |
|---|---|---|---|---|---|
| GPT Image 2 | Primary for text-critical and UI work | near 99% | class leading | yes, high fidelity | moderate |
| GPT Image 1.5 | Current fallback on fal.ai | ~90-95% | strong | yes | moderate |
| Flux 2 Pro | Photoreal, textless creative | limited | class leading | yes | moderate |
| Imagen 4 | Clean Google-style output | solid | strong | limited | lower |
| Nano Banana Pro | Stylised marketing creative | limited on long copy | strong | yes | moderate |
| Ideogram 3 | Typographic poster work | strong | moderate | limited | lower |
What the new model unlocks
Campaign hero stills, social ad variants, email headers. The typography upgrade lets a single operator replace the designer in the loop for non-hero creative.
Matte background swaps, seasonal variants, packaging mockups without a physical studio. Edit mode preserves the product silhouette while restaging the environment.
Generate dashboard, mobile app, and operating system screenshots with readable button copy and realistic microcopy. Ship them in product marketing, investor decks, and internal specs without a designer touching them.
Blog covers, editorial illustrations, newsletter openers. The model understands tone descriptors (editorial, deadpan, cinematic, painterly) and keeps them consistent across a series.
Lock a reference in /style-reference and the whole asset library carries the same mood, palette, and rendering treatment. Fast way to scale a visual system across thousands of images.
Same FAL_KEY, same fal.subscribe helper, one endpoint change when the public release lights up. Wire GPT Image 2 into a SaaS feature, an automation platform, or a custom pipeline in a few lines of TypeScript or Python.
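The one-endpoint-change integration above can be sketched in Python with the real `fal_client` library. Hedged assumptions: the `fal-ai/gpt-image-2` endpoint id is expected but not live, and the `prompt`, `image_size`, and `quality` parameter names mirror the current gpt-image-1.5 endpoint rather than any confirmed GPT Image 2 schema.

```python
# Not live yet; expected to mirror fal-ai/gpt-image-1.5 at launch.
EXPECTED_ENDPOINT = "fal-ai/gpt-image-2"


def build_request(prompt: str, image_size: str = "1024x1024",
                  quality: str = "medium") -> dict:
    """Arguments payload we expect the endpoint to accept (assumed names)."""
    return {"prompt": prompt, "image_size": image_size, "quality": quality}


def generate(prompt: str, **kwargs) -> dict:
    """Submit the request once the endpoint is live (requires FAL_KEY)."""
    import fal_client  # pip install fal-client

    return fal_client.subscribe(EXPECTED_ENDPOINT,
                                arguments=build_request(prompt, **kwargs))
```

Until the endpoint flips on, pointing `EXPECTED_ENDPOINT` at `fal-ai/gpt-image-1.5` keeps the same pipeline working against the current model.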
Join the waitlist to get pinged the moment gpt-image-2 flips on for this playground. While you wait, the tools here run against the current OpenAI image endpoint on fal.ai so you can iterate.