9 min read · VentureBeat

Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain

One employee at Vercel adopted an AI tool. One employee at that AI vendor got hit with an infostealer. That combination created a walk-in path to Vercel’s production environments through an OAuth grant that nobody had reviewed.

Vercel, the cloud platform behind Next.js and its millions of weekly npm downloads, confirmed on Sunday that attackers gained unauthorized access to internal systems. Mandiant was brought in, law enforcement was notified, and investigations remain active. A Monday update confirmed that a coordinated audit with GitHub, Microsoft, npm, and Socket found no compromise of any Vercel-published npm packages, including Next.js, Turbopack, and the AI SDK. Vercel also announced that environment variable creation now defaults to “sensitive.”

Context.ai was the entry point. OX Security’s analysis found that a Vercel employee installed the Context.ai browser extension and signed into it using a corporate Google Workspace account, granting broad OAuth permissions. When Context.ai was breached, the attacker inherited that employee’s Workspace access, pivoted into Vercel environments, and escalated privileges by sifting through environment variables not marked as “sensitive.” Vercel’s bulletin states that variables marked sensitive are stored in a manner that prevents them from being read. Variables without that designation were accessible in plaintext through the dashboard and API, and the attacker used them as the escalation path.

CEO Guillermo Rauch described the attacker as “highly sophisticated and, I strongly suspect, significantly accelerated by AI.” Jaime Blasco, CTO of Nudge Security, independently surfaced a second OAuth grant tied to Context.ai’s Chrome extension, matching the client ID from Vercel’s published IOC to Context.ai’s Google account before Rauch’s public statement. The Hacker News reported that Google removed Context.ai’s Chrome extension from the Chrome Web Store on March 27. Per The Hacker News and Nudge Security, that extension embedded a second OAuth grant enabling read access to users’ Google Drive files.

Patient zero: a Roblox cheat and a Lumma Stealer infection

Hudson Rock published forensic evidence on Monday, reporting that the breach origin traces to a February 2026 Lumma Stealer infection on a Context.ai employee’s machine. According to Hudson Rock, browser history showed the employee downloading Roblox auto-farm scripts and game exploit executors. Harvested credentials included Google Workspace logins, Supabase keys, Datadog tokens, Authkit credentials, and the support@context.ai account. Hudson Rock identified the infected user as a core member of “context-inc,” Context.ai’s tenant on the Vercel platform, with administrative access to production environment variable dashboards.

Context.ai published its own bulletin on Sunday (updated Monday), disclosing that the breach affects its deprecated AI Office Suite consumer product, not its enterprise Bedrock offering (Context.ai’s agent infrastructure product, unrelated to AWS Bedrock). Context.ai says it detected unauthorized access to its AWS environment in March, hired CrowdStrike to investigate, and shut down the environment. Its updated bulletin then disclosed that the scope was broader than initially understood: the attacker also compromised OAuth tokens for consumer users, and one of those tokens opened the door to Vercel’s Google Workspace.

Dwell time is the detail that should concern security directors. Nearly a month separated Context.ai’s March detection from the Vercel disclosure on Sunday. A separate Trend Micro analysis references an intrusion beginning as early as June 2024 — a finding that, if confirmed, would extend the dwell time to roughly 22 months. VentureBeat could not independently reconcile that timeline with Hudson Rock's February 2026 dating; Trend Micro did not respond to a request for comment before publication.

Where detection goes blind

Security directors can use this table to benchmark their own detection stack against the four-hop kill chain this breach exploited.

| Kill Chain Hop | What Happened | Who Should Detect | Typical Coverage | Gap |
|---|---|---|---|---|
| 1. Infostealer on employee device | Context.ai employee downloaded Roblox cheat scripts; Lumma Stealer harvested Workspace creds and Supabase/Datadog/Authkit keys. | EDR on endpoint; credential exposure monitoring. | Low. Device likely under-monitored; no stealer-log monitoring at most orgs. | Most enterprises do not subscribe to infostealer intelligence feeds or correlate stealer logs against employee email domains. |
| 2. AWS compromise at Context.ai | Attacker used harvested credentials to access Context.ai’s AWS. Detected in March. | Context.ai cloud security; AWS CloudTrail. | Partially detected. Context.ai stopped AWS access but missed OAuth token exfiltration. | Initial investigation did not identify OAuth token exfiltration; scope was underestimated until Vercel’s disclosure. |
| 3. OAuth token theft into Vercel Workspace | Compromised OAuth token used to access a Vercel employee’s Google Workspace. Employee had granted “Allow All” permissions via Chrome extension. | Google Workspace audit logs; OAuth app monitoring; CASB. | Very low. Most orgs do not monitor third-party OAuth token usage patterns. | No approval workflow intercepted the grant; no anomaly detection on OAuth token use from a compromised third party. This is the hop no one saw. |
| 4. Lateral movement into Vercel production | Attacker enumerated non-sensitive env vars (accessible via dashboard/API) and harvested customer credentials. | Vercel platform audit logs; behavioral analytics. | Moderate. Vercel detected the intrusion after the attacker accessed customer credentials. | Detection occurred after exfiltration, not before; env var access by a compromised Workspace account did not trigger real-time alerting. |

What’s confirmed vs. what’s claimed

Vercel’s bulletin confirms unauthorized access to internal systems, a limited subset of affected customers, and two IOCs tied to Context.ai’s Google Workspace OAuth apps. Rauch confirmed that Next.js, Turbopack, and Vercel’s open-source projects are unaffected.

Separately, a threat actor using the ShinyHunters name posted on BreachForums claiming to hold Vercel’s internal database, employee accounts, and GitHub and NPM tokens, with a $2M asking price. Austin Larsen, principal threat analyst at Google Threat Intelligence, assessed the claimant as “likely an imposter.” Actors previously linked to ShinyHunters have denied involvement. None of these claims has been independently verified.

Six governance failures the Vercel breach exposed

1. AI tool OAuth scopes go unaudited. Context.ai’s own bulletin states that a Vercel employee granted “Allow All” permissions using a corporate account. Most security teams have no inventory of which AI tools their employees have granted OAuth access to.

CrowdStrike CTO Elia Zaitsev put it bluntly at RSAC 2026: “Don’t give an agent access to everything just because you’re lazy. Give it access to only what it needs to get the job done.” Jeff Pollard, VP and principal analyst at Forrester, told Cybersecurity Dive that the attack is a reminder about third-party risk management concerns and AI tool permissions.

2. Environment variable classification is doing real security work. Vercel distinguishes between variables marked “sensitive” (stored in a manner that prevents reading) and those without that designation (accessible in plaintext through the dashboard and API). Attackers used the accessible variables as the escalation path. A developer convenience toggle determined the blast radius. Vercel has since changed its default: new environment variables now default to sensitive.

“Modern controls get deployed, but if legacy tokens or keys aren’t retired, the system quietly favors them,” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat.
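The classification audit itself is scriptable. The Python sketch below flags project environment variables that remain readable in plaintext; the endpoint URL, API token, and response shape (a list of env records with a `type` field, where “sensitive” marks non-readable values) are assumptions to verify against Vercel’s current API documentation before relying on this.

```python
# Sketch: flag environment variables still readable in plaintext.
# Assumption: env records carry a "type" field, and only the
# "sensitive" (or legacy "secret") type prevents plaintext reads.

def find_readable_env_vars(envs):
    """Return env records whose values can be read back in plaintext."""
    return [e for e in envs if e.get("type") not in ("sensitive", "secret")]

if __name__ == "__main__":
    # Hypothetical fetch; requires a real project ID and API token.
    # import requests
    # resp = requests.get(
    #     f"https://api.vercel.com/v9/projects/{project_id}/env",
    #     headers={"Authorization": f"Bearer {api_token}"},
    # )
    # for e in find_readable_env_vars(resp.json().get("envs", [])):
    #     print("readable in plaintext:", e["key"])
    pass
```

Anything the filter returns is a candidate for upgrading to sensitive, per the default Vercel has now adopted.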

3. Infostealer-to-SaaS-to-supply-chain escalation chains lack detection coverage. Hudson Rock’s reporting reveals a kill chain that crossed four organizational boundaries. No single detection layer covers that chain. Context.ai’s updated bulletin acknowledged that the scope extended beyond what was initially identified during its CrowdStrike-led investigation.
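The correlation step this gap calls for, matching stealer-log intelligence against corporate email domains, is simple to prototype. This is a minimal sketch; the record shape (dicts with a `username` field) is an assumption about your feed’s format.

```python
# Sketch: match credential records from an infostealer intelligence feed
# against corporate email domains, so exposed accounts can be queued for
# rotation. The "username" field name is an assumed feed format.

def match_stealer_records(records, corporate_domains):
    """Return feed records whose username belongs to a corporate domain."""
    hits = []
    for rec in records:
        user = rec.get("username", "")
        domain = user.rsplit("@", 1)[-1].lower() if "@" in user else ""
        if domain in corporate_domains:
            hits.append(rec)
    return hits
```

In practice this would run on every feed delivery, with matches feeding an automated credential-rotation workflow rather than a manual queue.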

4. Dwell time between vendor detection and customer notification exceeds attacker timelines. Context.ai detected the AWS compromise in March. Vercel disclosed on Sunday. Every CISO should ask their vendors: what is your contractual notification window after detecting unauthorized access that could affect downstream customers?

5. Third-party AI tools are the new shadow IT. Vercel’s bulletin describes Context.ai as “a small, third-party AI tool.” Grip Security’s March 2026 analysis of 23,000 SaaS environments found a 490% year-over-year increase in AI-related attacks. Vercel is the latest enterprise to learn this the hard way.

6. AI-accelerated attackers compress response timelines. Rauch’s assessment of AI acceleration comes from what his IR team observed. CrowdStrike’s 2026 Global Threat Report puts the baseline at a 29-minute average eCrime breakout time, 65% faster than 2024.

Security director action plan

| Attack Surface | What Failed | Recommended Action | Owner |
|---|---|---|---|
| OAuth governance | Context.ai held broad “Allow All” Workspace permissions; no approval workflow intercepted the grant. | Inventory every AI-tool OAuth grant org-wide. Revoke scopes exceeding least privilege. Check both Vercel IOCs now. | Identity / IAM |
| Env var classification | Variables not marked “sensitive” remained accessible; that accessibility became the escalation path. | Default to non-readable. Require a security sign-off to downgrade any variable to accessible. | Platform eng + security |
| Infostealer-to-supply-chain | Kill chain spanned Lumma Stealer, Context.ai AWS, OAuth tokens, Vercel Workspace, and production environments. | Correlate infostealer intel feeds against employee domains. Automate credential rotation when creds surface in stealer logs. | Threat intel + SOC |
| Vendor notification lag | Nearly a month between Context.ai detection and Vercel disclosure. | Require 72-hour notification clauses in all contracts involving OAuth or identity integration. | Third-party risk / legal |
| Shadow AI adoption | One employee’s unapproved AI tool became the breach vector for hundreds of orgs. | Extend shadow-IT discovery to AI agent platforms. Treat unapproved adoption as a security event. | Security ops + procurement |
| Lateral movement speed | Rauch suspects AI acceleration; the attacker compressed the access-to-escalation window. | Cut detection-to-containment SLAs below the 29-minute eCrime average. | SOC + IR team |

Run both IoC checks today

Search your Google Workspace admin console (Security > API Controls > Manage Third-Party App Access) for two OAuth App IDs.

The first is 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com, tied to Context.ai’s Office Suite.

The second is 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com, tied to Context.ai’s Chrome extension and granting Google Drive read access.

If either touched your environment, you are in the blast radius regardless of what Vercel discloses next.
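For larger tenants, the same check can be automated against the Admin SDK Directory API’s `tokens.list` method rather than clicked through in the admin console. The sketch below separates the pure IOC-matching step (runnable as-is) from the API fetch, which is commented out because it requires domain-wide delegated credentials; the user list and credential setup are placeholders you must supply.

```python
# Sketch: flag Google Workspace OAuth grants matching the published
# Context.ai IOCs. The Admin SDK fetch is illustrative; credentials
# and the Workspace user list are placeholders.

CONTEXT_AI_IOCS = {
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
    "110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com",
}

def find_ioc_grants(tokens, iocs=CONTEXT_AI_IOCS):
    """Return token grants whose clientId matches a known IOC.

    `tokens` is a list of dicts shaped like items from the Admin SDK
    Directory API tokens.list response (each carries a "clientId")."""
    return [t for t in tokens if t.get("clientId") in iocs]

if __name__ == "__main__":
    # Hypothetical fetch via google-api-python-client with delegated creds:
    # from googleapiclient.discovery import build
    # service = build("admin", "directory_v1", credentials=creds)
    # for user in user_emails:  # your Workspace user list
    #     items = service.tokens().list(userKey=user).execute().get("items", [])
    #     for hit in find_ioc_grants(items):
    #         print(f"IOC match for {user}: {hit['clientId']}")
    pass
```

Any match should trigger the same response as a confirmed exposure: revoke the grant, rotate the account’s credentials, and review its audit trail.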

What this means for security directors

Forget the Vercel brand name for a moment. What happened here is the first major proof case that AI agent OAuth integrations create a breach class that most enterprise security programs cannot detect, scope, or contain. A Roblox cheat download in February led to production infrastructure access in April. Four organizational boundaries, two cloud providers, and one identity perimeter. No zero-day required.

In most enterprises, employees have already connected AI tools to corporate Google Workspace, Microsoft 365, or Slack instances with broad OAuth scopes, and security teams have no inventory of those grants. The Vercel breach is the case study for what that exposure looks like when an attacker finds it first.
