
“Not a Secret”… Until It Was: The API Key Problem Nobody Saw Coming


For over a decade, Google told developers that API keys weren’t secrets. Embed them in your JavaScript. Paste them into your HTML. Ship them in your client-side code. Firebase’s own security checklist had a green checkmark next to “API keys for Firebase services are not secret.” Google Maps documentation literally instructed developers to paste their key directly into a <script> tag.

And they were right. Those keys were project identifiers — billing tokens, not authentication credentials. Locking them down would’ve been over-engineering. Every secret scanner that flagged AIza... patterns was generating noise, and teams rightfully suppressed those findings.

Then Gemini arrived, and the rules changed. Nobody sent the memo.

Full technical deep-dive: This article builds on excellent research by Truffle Security. Their post has proof-of-concept code, Common Crawl scan results, Google’s own exposed keys, and the full disclosure timeline. Read it. I’m going to focus on what this means for your security program and why your scanner might be lying to you.

$ cat /var/log/changelog | grep "breaking"

What Changed

Here’s the short version: Google Cloud uses a single API key format (AIza...) for two fundamentally different purposes — public identification and sensitive authentication. For years, the “public identification” use case was the only game in town. Maps, Firebase, YouTube embeds. These keys were designed for billing, restricted with bypassable HTTP referer allow-listing, and explicitly documented as safe for client-side exposure.

Then Google launched the Gemini API (Generative Language API) and made it available as a service you could enable on existing GCP projects. The moment someone on your team flips that switch, every unrestricted API key in the project silently gains access to sensitive Gemini endpoints. No warning dialog. No confirmation email. No notification to the developer who created the key three years ago and embedded it in the company website’s map widget.

Truffle Security calls this “retroactive privilege expansion,” and that’s exactly what it is. A key that was harmless when it was created — because you followed Google’s own guidance — is now a live credential that can access uploaded files, cached content, and rack up AI compute charges on your account. The key didn’t change. The permissions around it did.

How bad is it at scale? Truffle Security scanned the November 2025 Common Crawl dataset (~700 TiB of publicly scraped web content) and found 2,863 live Google API keys vulnerable to this escalation. The victims included major financial institutions, global recruiting firms, and — notably — Google itself.

One of Google’s own public-facing websites had a key embedded since at least February 2023 that silently gained Gemini access. If the vendor’s own engineering teams can’t avoid this trap, expecting every developer to just figure it out is not a recipe for success.


$ diff --color before.conf after.conf

The Retroactive Privilege Problem

This isn’t a misconfiguration. That’s the uncomfortable part.

The developer who created that Maps key followed Google’s documentation to the letter. The security team that suppressed the scanner finding was applying the correct guidance at the time. The architect who put the key in client-side JavaScript was doing what Google explicitly told them to do.

What makes this a privilege escalation rather than a misconfiguration is the sequence of events:

1. Developer creates an API key and embeds it in a website for Maps. (At this point, the key is harmless.)
2. The Gemini API gets enabled on the same project — maybe by a different team, maybe months later. (Now that same key can access sensitive Gemini endpoints.)
3. The developer is never warned. The key went from public identifier to secret credential, and nobody was notified.
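The sequence above can be sketched as two gcloud commands. This is a hedged illustration, not something to run as-is: the project ID is a hypothetical placeholder, and the mutating commands are left commented out because they would change a real project.

```shell
# Hypothetical project ID -- substitute your own.
PROJECT="my-maps-project"

# Step 1 (years ago): a key created for the Maps widget.
# New keys default to "Unrestricted" -- valid for every enabled API.
# gcloud services api-keys create --project="$PROJECT" --display-name="maps-widget"

# Step 2 (today, possibly a different team): Gemini gets enabled.
# The moment this runs, every unrestricted key in the project can
# reach the Generative Language API endpoints.
# gcloud services enable generativelanguage.googleapis.com --project="$PROJECT"

# There is no step 3: the key's creator is never notified.
```

Nothing about the key itself changed between step 1 and step 2, which is exactly why no scanner keyed on the credential's contents can catch this.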

This creates a class of vulnerability that most security models aren’t built to handle. We think about credentials in binary terms — either something is a secret or it isn’t. We scope our controls accordingly. But what happens when the classification changes after the credential is already deployed in production?

Google’s default doesn’t help either. New API keys default to “Unrestricted” — meaning they’re immediately valid for every enabled API in the project, including Gemini. The UI shows a yellow warning about “unauthorized use,” but the architectural default is wide open.

In CWE terms: This is insecure-by-default design (CWE-1188) combined with incorrect privilege assignment (CWE-269), baked into the platform’s credential model.

Google’s Response and Remediation Roadmap

Credit where it’s due — Google didn’t ignore this. After Truffle Security’s initial report was dismissed as “intended behavior,” the team provided concrete evidence from Google’s own infrastructure. Google then reclassified it as a bug (Single-Service Privilege Escalation, READ — Tier 1), expanded their leaked-credential detection pipeline, and started restricting exposed keys from accessing Gemini.

Their published roadmap includes:

  • Scoped defaults — New keys created through AI Studio will default to Gemini-only access
  • Leaked key blocking — Defaulting to blocking API keys discovered as leaked
  • Proactive notification — Communicating proactively when leaked keys are identified

Those are meaningful steps. The open question is whether they’ll retroactively audit existing keys and notify every project owner whose credentials are currently exposed.


$ grep -rn "suppress\|ignore\|allowlist" .scannerconfig

The Blind Spot in Your Scanner

Here’s where this gets personal for security teams.

If you run any kind of secret scanning — TruffleHog, GitLeaks, GitHub Advanced Security, whatever — there’s a good chance your team made a deliberate decision at some point to suppress or deprioritize Google API key findings. And that decision was correct at the time. Google said these weren’t secrets. The keys were designed to be public. Flagging them was noise.

But those suppression rules are still in place. And now they’re creating blind spots.

Think about where those decisions live:

Where Your Suppressions Hide

  • Scanner exclusion files — .trufflehog.yml, .gitleaksrc, secretlint configs with AIza patterns on the ignore list
  • SAST/DAST suppressions — Findings marked as “false positive” or “accepted risk” that nobody re-evaluates when the threat model changes
  • Code review culture — Your team internalized “Google API keys are fine in client code.” That mental model is now wrong, but it’s still influencing every review where someone sees AIza... and doesn’t flag it
  • CI/CD pipelines — Pre-commit hooks and pipeline scanners tuned to reduce noise by excluding known “safe” patterns
  • Internal documentation — Runbooks and wiki pages that say “Google API keys are not secrets, don’t worry about them in public repos”

The fix isn’t just rotating keys (though you should do that too). The fix is going back and undoing every decision you made based on the old guidance. Unfilter the findings. Re-scan. And this time, verify which of those keys actually have Gemini access — because a regex match on AIza... alone won’t tell you that.
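One way to do that verification manually is to probe the public Generative Language models endpoint, which is a read-only call. A sketch, not production tooling — the key value is a hypothetical placeholder, and the script refuses to probe until you substitute a real candidate:

```shell
# Hypothetical placeholder -- substitute a candidate key found in your code.
KEY="AIza...candidate-key"

# The models list endpoint is a read-only probe: HTTP 200 means the key has
# live Generative Language API access; 403 means it does not.
case "$KEY" in
  *...*)
    echo "Replace KEY with a real candidate before probing." ;;
  *)
    status=$(curl -s -o /dev/null --max-time 10 -w '%{http_code}' \
      "https://generativelanguage.googleapis.com/v1beta/models?key=$KEY")
    echo "HTTP $status"
    ;;
esac
```

A 200 here means the key is live against Gemini right now; anything else means this particular escalation doesn’t apply to that key.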

This Is Where TruffleHog Earns Its Keep

TruffleHog doesn’t just pattern-match. It actively verifies whether a discovered key is live and whether it has Gemini API access. The --only-verified flag means you’re not wading through hundreds of dead keys. You get a short list of the ones that are actually dangerous right now.

That verification step is the difference between “you have 400 findings to triage” and “you have 12 keys that grant live Gemini access right now.”

If you’re not already using TruffleHog, this is a good reason to start. It’s open source, actively maintained, and the team behind it (Truffle Security) is doing exactly the kind of research that makes the rest of us safer. They found this vulnerability, pushed Google past an initial dismissal, and published both the research and the free tooling to help people fix it. That’s the security community working the way it should.


$ nmap --script=crystal-ball future-vulns/

This Won’t Be the Last Time

Here’s the broader pattern that should keep security teams up at night: every major platform is bolting AI capabilities onto existing infrastructure, and the credential implications are an afterthought.

Google is the case study because they’re the biggest and they moved first, but this pattern — “enable a powerful new service that inherits permissions from existing credentials” — is not unique to them. It’s an industry-wide race.

Think about the trajectory:

  • AWS launched Bedrock and keeps expanding which services can be accessed through existing IAM roles and keys. If a developer created an IAM key for S3 access and someone later attaches a Bedrock policy to that role, the same privilege escalation pattern applies — except with AWS’s permission model, it’s even harder to audit retroactively.
  • Azure is integrating AI services across its platform. An Azure AD service principal created for a simple web app could gain access to Azure OpenAI endpoints if the right API permissions get added.
  • Every SaaS platform adding “AI features” to existing products is making the same trade-off: bolt the new capability onto the existing auth model for speed, worry about security boundaries later.

The common thread is urgency. Everyone is racing to ship AI features, and nobody wants to slow down to redesign their authentication architecture. So they reuse what’s already there — existing API keys, existing service accounts, existing OAuth scopes — and hope the existing permission model holds.

The blast radius shifted: These auth systems were designed for a world where a leaked key meant “someone runs up your map embed bill.” The blast radius now includes “someone accesses your proprietary data through an AI endpoint and charges $10,000/day to your account.”

The question isn’t whether this will happen on other platforms. It’s when — and whether you’ll notice before an attacker does.


$ sudo ./audit.sh --all-projects

What To Do Right Now

Whether or not you think you’re affected, here’s the play:

Check every GCP project for the Generative Language API

Go to the GCP console, navigate to APIs & Services > Enabled APIs & Services, and look for “Generative Language API.” Do this for every project in your organization. If it’s not enabled, you’re not affected by this specific issue.

// but keep reading — the broader pattern still applies
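If clicking through the console for every project doesn’t scale, the same check can be scripted with gcloud. A sketch, assuming you have gcloud installed and list permission on your projects; it’s guarded so it no-ops where gcloud is absent:

```shell
# Service name behind "Generative Language API" in the console.
GEMINI_SERVICE="generativelanguage.googleapis.com"

# Walk every project you can see and flag the ones with Gemini enabled.
if command -v gcloud >/dev/null 2>&1; then
  for project in $(gcloud projects list --format="value(projectId)"); do
    if gcloud services list --enabled --project="$project" \
        --filter="config.name=$GEMINI_SERVICE" \
        --format="value(config.name)" | grep -q "$GEMINI_SERVICE"; then
      echo "AFFECTED: $project has the Generative Language API enabled"
    fi
  done
fi
```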

If Gemini is enabled, audit your API keys

Navigate to APIs & Services > Credentials. Check each API key’s configuration. You’re looking for two things:

  • Keys with a warning icon (set to unrestricted)
  • Keys that explicitly list the Generative Language API in their allowed services

// either configuration allows the key to access Gemini
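The same audit is scriptable. A sketch with gcloud, using a hypothetical project ID; the exact output fields come from the API Keys API, so verify them against your gcloud version:

```shell
# Hypothetical project ID -- substitute your own.
PROJECT="my-maps-project"

# Dump every key's configuration as JSON. A key whose "restrictions"
# object is missing or has no "apiTargets" is unrestricted -- and can
# therefore reach Gemini the moment the API is enabled.
if command -v gcloud >/dev/null 2>&1; then
  gcloud services api-keys list --project="$PROJECT" --format=json
fi
# In the output, look for either:
#   - no "apiTargets" under "restrictions"          -> unrestricted
#   - "generativelanguage.googleapis.com" listed    -> explicit Gemini access
```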

Verify none of those keys are public

This is the critical step. If a key with Gemini access is embedded in client-side JavaScript, checked into a public repository, or otherwise exposed on the internet, you have a problem. Start with your oldest keys first — those are the most likely to have been deployed publicly under the old guidance.

// if you find an exposed key, rotate it. today.
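A quick first pass over deployed assets can be done with grep, since Google API keys follow a stable pattern: `AIza` plus 35 URL-safe characters. A sketch — the directory path is a hypothetical placeholder, and this only finds pattern matches, not live keys:

```shell
# Google API keys: "AIza" followed by 35 URL-safe characters.
KEY_PATTERN='AIza[0-9A-Za-z_-]{35}'

# Hypothetical path -- point it at your built client bundles or
# anything else you serve publicly.
DIST_DIR="./dist"
if [ -d "$DIST_DIR" ]; then
  grep -rEno "$KEY_PATTERN" "$DIST_DIR"
fi
```

Remember this is only the regex half of the job; whether a match is live and has Gemini access still needs verification.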

Scan your codebase and CI/CD with TruffleHog

Don’t just check your GCP console. Scan everywhere:

# Scan your local codebase
trufflehog filesystem /path/to/your/code --only-verified

# Scan a Git repo (includes full commit history)
trufflehog git https://github.com/your-org/your-repo --only-verified

# Scan your CI/CD pipeline configs
trufflehog filesystem /path/to/ci-configs --only-verified

The --only-verified flag is critical. TruffleHog will hit the API endpoint and confirm whether the key has live Gemini access. You’re not triaging regex matches — you’re looking at confirmed exposures.

Revisit your scanner suppressions

This is the one most people will skip, and it’s the most important. Go back to your secret scanning configuration and remove any suppression rules that were based on the premise that Google API keys aren’t secrets. Those rules were correct when they were written. They’re wrong now.

// re-scan everything, then re-evaluate which findings actually need suppression based on the current threat model — not the one from 2019
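The entry you’re hunting for often looks something like this — a hypothetical Gitleaks allowlist fragment (file name, description, and rule layout will differ per team), shown here as the kind of thing to delete:

```toml
# .gitleaks.toml -- hypothetical legacy suppression to REMOVE
[allowlist]
  description = "Google API keys are public identifiers, not secrets (old guidance)"
  regexes = ['''AIza[0-9A-Za-z_-]{35}''']
```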


$ exit

The Rules Changed. Your Move.

The security industry has a vocabulary for things that go wrong. Misconfigurations. Vulnerabilities. Zero-days. But we don’t have a great word for what happened here — a credential that was correctly classified as non-sensitive, correctly deployed in a public context, and then had its sensitivity silently upgraded by the platform vendor without notice.

Truffle Security did the work that needed to happen. They found the vulnerability, proved it at scale, demonstrated it against Google’s own infrastructure, pushed past an initial dismissal, and published both the research and the free tooling to fix it. That’s how the security community is supposed to work.

The bigger lesson isn’t about Google. It’s about what happens when every platform decides to bolt AI onto existing infrastructure at top speed. The credentials that were safe yesterday might not be safe tomorrow. Your suppression rules, your code review instincts, your “that’s not a secret” mental models — all of it needs a second look.

The rules changed. Go check your keys.
