Google Vertex AI Gemini global endpoint outage briefly disrupts access; no lasting feature or rate-limit changes observed
Published Mar 9, 2026, 5:57 AM UTC
TLDR
Google confirmed an incident on the Vertex AI Gemini global endpoint with elevated error rates; the scope was limited to the global endpoint and the incident has been resolved. No corroborated signs of model removals, tightened rate limits, or persistent capability regressions after recovery.
Why this matters
Observed fact: elevated error rates on the Vertex AI Gemini global endpoint during the incident window.
What changed
- Google Cloud reported an incident causing increased error rates for Vertex AI Gemini API customers accessing the global endpoint, with a defined start time and subsequent resolution update.
- No other primary artifacts or corroborating reports indicate lasting changes to Gemini model availability, rate limits, or capabilities following incident recovery.
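Because the stated scope was the global endpoint only, clients pinned to a regional endpoint would have routed around the incident. A minimal sketch of the URL difference, following Vertex AI's documented endpoint scheme (the project ID, model name, and region below are illustrative placeholders, not values from the incident report):

```python
def gemini_endpoint(project: str, model: str, location: str = "global") -> str:
    """Build the generateContent URL for a given Vertex AI location.

    The global endpoint uses the bare aiplatform.googleapis.com host with
    location "global"; regional endpoints prefix the host with the region.
    """
    host = ("aiplatform.googleapis.com" if location == "global"
            else f"{location}-aiplatform.googleapis.com")
    return (f"https://{host}/v1/projects/{project}/locations/{location}"
            f"/publishers/google/models/{model}:generateContent")

# A client seeing elevated errors on the global endpoint could retry
# against a pinned region instead (placeholder project/model/region):
primary = gemini_endpoint("my-project", "gemini-2.0-flash")
fallback = gemini_endpoint("my-project", "gemini-2.0-flash", "us-central1")
```

During an incident scoped like this one, a retry layer that swaps `primary` for `fallback` on repeated 5xx responses would have restored service at the cost of losing global load balancing.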
Topic context
Use this page when you want fast context on confirmed model launches from OpenAI, Anthropic, Google DeepMind, xAI, Meta, and similar labs without scanning every release note, model card, or developer post yourself. Key angles: OpenAI, Anthropic, Google DeepMind, Gemini.
Summary
The only primary artifact is Google Cloud’s incident report confirming elevated error rates for Vertex AI Gemini customers using the global endpoint, now marked resolved. No other official channels or downstream reports corroborate lasting changes to availability, rate limits, or capabilities, so the near-term model distribution risk appears transient pending any postmortem details.