Don’t worry! It’s Not a Secret!
“Not a Secret”… Until It Was
For over a decade, Google told developers that API keys weren’t secrets. Embed them in your JavaScript. Paste them into your HTML. Ship them in your client-side code. Firebase’s own security checklist had a green checkmark next to “API keys for Firebase services are not secret.” Google Maps documentation literally instructed developers to paste their key directly into a <script> tag.
And they were right. Those keys were project identifiers — billing tokens, not authentication credentials. Locking them down would’ve been over-engineering. Every secret scanner that flagged AIza... patterns was generating noise, and teams rightfully suppressed those findings.
Then Gemini arrived, and the rules changed. Nobody sent the memo.
Full technical deep-dive: This article builds on excellent research by Truffle Security. Their post has proof-of-concept code, Common Crawl scan results, Google’s own exposed keys, and the full disclosure timeline. Read it. I’m going to focus on what this means for your security program and why your scanner might be lying to you.
$ cat /var/log/changelog | grep "breaking"
What Changed
Here’s the short version: Google Cloud uses a single API key format (AIza...) for two fundamentally different purposes — public identification and sensitive authentication. For years, the “public identification” use case was the only game in town. Maps, Firebase, YouTube embeds. These keys were designed for billing, restricted with bypassable HTTP referer allow-listing, and explicitly documented as safe for client-side exposure.
Then Google launched the Gemini API (Generative Language API) and made it available as a service you could enable on existing GCP projects. The moment someone on your team flips that switch, every unrestricted API key in the project silently gains access to sensitive Gemini endpoints. No warning dialog. No confirmation email. No notification to the developer who created the key three years ago and embedded it in the company website’s map widget.
Truffle Security calls this “retroactive privilege expansion,” and that’s exactly what it is. A key that was harmless when it was created — because you followed Google’s own guidance — is now a live credential that can access uploaded files, cached content, and rack up AI compute charges on your account. The key didn’t change. The permissions around it did.
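You can see the expansion concretely: once the Generative Language API is enabled on a project, any unrestricted key from that project can hit the public Gemini REST endpoint. A minimal sketch of the check — the key value is a placeholder you'd substitute with a key from your own project:

```shell
# Substitute any AIza... key from the project; leave empty to skip the call.
API_KEY="${API_KEY:-}"
GEMINI_URL="https://generativelanguage.googleapis.com/v1beta/models?key=${API_KEY}"

if [ -n "$API_KEY" ]; then
  # HTTP 200 with a JSON model list = the key has live Gemini access;
  # 403 = it does not (key restricted, or the API is disabled).
  curl -s -o /dev/null -w "%{http_code}\n" "$GEMINI_URL"
else
  echo "export API_KEY first"
fi
```

A key created for a map widget years ago will pass this check today if nobody ever restricted it.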
One of Google’s own public-facing websites had a key embedded since at least February 2023 that silently gained Gemini access. If the vendor’s own engineering teams can’t avoid this trap, expecting every developer to just figure it out is not a recipe for success.
$ diff --color before.conf after.conf
The Retroactive Privilege Problem
This isn’t a misconfiguration. That’s the uncomfortable part.
The developer who created that Maps key followed Google’s documentation to the letter. The security team that suppressed the scanner finding was applying the correct guidance at the time. The architect who put the key in client-side JavaScript was doing what Google explicitly told them to do.
What makes this a privilege escalation rather than a misconfiguration is the sequence of events:
- A developer creates an API key and exposes it publicly, following Google’s documented guidance at the time.
- Someone in the organization later enables the Gemini API on the same project.
- The key silently becomes a live credential for sensitive endpoints — no code change, no redeployment, no notification to anyone.
This creates a class of vulnerability that most security models aren’t built to handle. We think about credentials in binary terms — either something is a secret or it isn’t. We scope our controls accordingly. But what happens when the classification changes after the credential is already deployed in production?
Google’s default doesn’t help either. New API keys default to “Unrestricted” — meaning they’re immediately valid for every enabled API in the project, including Gemini. The UI shows a yellow warning about “unauthorized use,” but the architectural default is wide open.
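The wide-open default is avoidable at creation time. A hedged sketch using the gcloud api-keys command — the display name and target service here are examples, not values from the original research:

```shell
# Create a key scoped to a single service instead of accepting the
# unrestricted default. Display name and service are example values.
SERVICE="maps-backend.googleapis.com"
if command -v gcloud >/dev/null 2>&1; then
  gcloud services api-keys create \
    --display-name="maps-widget-key" \
    --api-target=service="$SERVICE"
else
  echo "gcloud not installed"
fi
```

A key scoped this way stays a Maps key even if Gemini is later enabled on the project — the restriction, not the default, is what holds the boundary.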
Google’s Response and Remediation Roadmap
Credit where it’s due — Google didn’t ignore this. After Truffle Security’s initial report was dismissed as “intended behavior,” the team provided concrete evidence from Google’s own infrastructure. Google then reclassified it as a bug (Single-Service Privilege Escalation, READ — Tier 1), expanded their leaked-credential detection pipeline, and started restricting exposed keys from accessing Gemini.
Their published roadmap includes:
- Scoped defaults — New keys created through AI Studio will default to Gemini-only access
- Leaked key blocking — Defaulting to blocking API keys discovered as leaked
- Proactive notification — Communicating proactively when leaked keys are identified
Those are meaningful steps. The open question is whether they’ll retroactively audit existing keys and notify every project owner whose credentials are currently exposed.
$ grep -rn "suppress\|ignore\|allowlist" .scannerconfig
The Blind Spot in Your Scanner
Here’s where this gets personal for security teams.
If you run any kind of secret scanning — TruffleHog, GitLeaks, GitHub Advanced Security, whatever — there’s a good chance your team made a deliberate decision at some point to suppress or deprioritize Google API key findings. And that decision was correct at the time. Google said these weren’t secrets. The keys were designed to be public. Flagging them was noise.
But those suppression rules are still in place. And now they’re creating blind spots.
Think about where those decisions live:
Where Your Suppressions Hide
- Scanner exclusion files — .trufflehog.yml, .gitleaksrc, secretlint configs with AIza patterns on the ignore list
- SAST/DAST suppressions — Findings marked as “false positive” or “accepted risk” that nobody re-evaluates when the threat model changes
- Code review culture — Your team internalized “Google API keys are fine in client code.” That mental model is now wrong, but it’s still influencing every review where someone sees AIza... and doesn’t flag it
- CI/CD pipelines — Pre-commit hooks and pipeline scanners tuned to reduce noise by excluding known “safe” patterns
- Internal documentation — Runbooks and wiki pages that say “Google API keys are not secrets, don’t worry about them in public repos”
The fix isn’t just rotating keys (though you should do that too). The fix is going back and undoing every decision you made based on the old guidance. Unfilter the findings. Re-scan. And this time, verify which of those keys actually have Gemini access — because a regex match on AIza... alone won’t tell you that.
This Is Where TruffleHog Earns Its Keep
TruffleHog doesn’t just pattern-match. It actively verifies whether a discovered key is live and whether it has Gemini API access. The --only-verified flag means you’re not wading through hundreds of dead keys. You get a short list of the ones that are actually dangerous right now.
That verification step is the difference between “you have 400 findings to triage” and “you have 12 keys that grant live Gemini access right now.”
If you’re not already using TruffleHog, this is a good reason to start. It’s open source, actively maintained, and the team behind it (Truffle Security) is doing exactly the kind of research that makes the rest of us safer. They found this vulnerability, pushed Google past an initial dismissal, and published both the research and the free tooling to help people fix it. That’s the security community working the way it should.
$ nmap --script=crystal-ball future-vulns/
This Won’t Be the Last Time
Here’s the broader pattern that should keep security teams up at night: every major platform is bolting AI capabilities onto existing infrastructure, and the credential implications are an afterthought.
Google is the case study because they’re the biggest and they moved first, but this pattern — “enable a powerful new service that inherits permissions from existing credentials” — is not unique to them. It’s an industry-wide race.
Think about the trajectory:
- AWS launched Bedrock and keeps expanding which services can be accessed through existing IAM roles and keys. If a developer created an IAM key for S3 access and someone later attaches a Bedrock policy to that role, the same privilege escalation pattern applies — except with AWS’s permission model, it’s even harder to audit retroactively.
- Azure is integrating AI services across its platform. An Azure AD service principal created for a simple web app could gain access to Azure OpenAI endpoints if the right API permissions get added.
- Every SaaS platform adding “AI features” to existing products is making the same trade-off: bolt the new capability onto the existing auth model for speed, worry about security boundaries later.
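The AWS variant of this audit can be sketched in one command — the role name here is hypothetical, and the JMESPath filter only catches policies with “Bedrock” in the name, so treat it as a starting point rather than a complete check:

```shell
# Hypothetical role created long ago for plain S3 access.
ROLE_NAME="${ROLE_NAME:-app-s3-reader}"
if command -v aws >/dev/null 2>&1; then
  # Any Bedrock-named policy attached here means the role has quietly
  # gained AI-service access on top of its original purpose.
  aws iam list-attached-role-policies --role-name "$ROLE_NAME" \
    --query "AttachedPolicies[?contains(PolicyName, 'Bedrock')]"
else
  echo "aws cli not installed"
fi
```

Inline policies and permission boundaries need their own checks, which is exactly why the retroactive audit is harder on AWS.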
The common thread is urgency. Everyone is racing to ship AI features, and nobody wants to slow down to redesign their authentication architecture. So they reuse what’s already there — existing API keys, existing service accounts, existing OAuth scopes — and hope the existing permission model holds.
The question isn’t whether this will happen on other platforms. It’s when — and whether you’ll notice before an attacker does.
$ sudo ./audit.sh --all-projects
What To Do Right Now
Whether or not you think you’re affected, here’s the play:
Go to the GCP console, navigate to APIs & Services > Enabled APIs & Services, and look for “Generative Language API.” Do this for every project in your organization. If it’s not enabled, you’re not affected by this specific issue.
// but keep reading — the broader pattern still applies
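The console click-through above can also be done from the CLI. A sketch — run it once per project, and note that the filter syntax assumes the current gcloud services list behavior:

```shell
# The Gemini service name on GCP.
SERVICE="generativelanguage.googleapis.com"
if command -v gcloud >/dev/null 2>&1; then
  # Empty output means the Gemini API is not enabled on this project.
  gcloud services list --enabled --filter="config.name:${SERVICE}"
else
  echo "gcloud not installed"
fi
```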
Navigate to APIs & Services > Credentials. Check each API key’s configuration. You’re looking for two things:
- Keys with a warning icon (set to unrestricted)
- Keys that explicitly list the Generative Language API in their allowed services
// either configuration allows the key to access Gemini
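The same audit is scriptable across many projects. A hedged sketch — the project ID is a placeholder, and the format string assumes the api-keys resource exposes restrictions under restrictions.apiTargets:

```shell
# Placeholder project ID — substitute your own, or loop over
# `gcloud projects list` output for an org-wide sweep.
PROJECT_ID="${PROJECT_ID:-my-project}"
if command -v gcloud >/dev/null 2>&1; then
  # An empty restrictions column means the key is unrestricted —
  # valid for every enabled API in the project, Gemini included.
  gcloud services api-keys list --project="$PROJECT_ID" \
    --format="table(displayName, restrictions.apiTargets[].service)"
else
  echo "gcloud not installed"
fi
```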
This is the critical step. If a key with Gemini access is embedded in client-side JavaScript, checked into a public repository, or otherwise exposed on the internet, you have a problem. Start with your oldest keys first — those are the most likely to have been deployed publicly under the old guidance.
// if you find an exposed key, rotate it. today.
Don’t just check your GCP console. Scan everywhere:
# Scan your local codebase
trufflehog filesystem /path/to/your/code --only-verified
# Scan a Git repo (includes full commit history)
trufflehog git https://github.com/your-org/your-repo --only-verified
# Scan your CI/CD pipeline configs
trufflehog filesystem /path/to/ci-configs --only-verified
The --only-verified flag is critical. TruffleHog will hit the API endpoint and confirm whether the key has live Gemini access. You’re not triaging regex matches — you’re looking at confirmed exposures.
This is the one most people will skip, and it’s the most important. Go back to your secret scanning configuration and remove any suppression rules that were based on the premise that Google API keys aren’t secrets. Those rules were correct when they were written. They’re wrong now.
// re-scan everything, then re-evaluate which findings actually need suppression based on the current threat model — not the one from 2019
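A quick way to surface those stale rules before re-scanning — the config filenames below are typical examples, so adjust the list to whatever your repos actually use:

```shell
# Sweep common scanner configs for Google-key suppression rules.
# Filenames are examples — adjust to your repo layout.
PATTERN="AIza"
grep -rn "$PATTERN" \
  .gitleaks.toml .trufflehog.yml .secretlintrc.json 2>/dev/null \
  || echo "no ${PATTERN} suppressions found in scanner configs"
```

Every hit is a decision that was made under the old threat model and needs to be re-made under the new one.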
$ exit
The Rules Changed. Your Move.
The security industry has a vocabulary for things that go wrong. Misconfigurations. Vulnerabilities. Zero-days. But we don’t have a great word for what happened here — a credential that was correctly classified as non-sensitive, correctly deployed in a public context, and then had its sensitivity silently upgraded by the platform vendor without notice.
Truffle Security did the work that needed to happen. They found the vulnerability, proved it at scale, demonstrated it against Google’s own infrastructure, pushed past an initial dismissal, and published both the research and the free tooling to fix it. That’s how the security community is supposed to work.
The bigger lesson isn’t about Google. It’s about what happens when every platform decides to bolt AI onto existing infrastructure at top speed. The credentials that were safe yesterday might not be safe tomorrow. Your suppression rules, your code review instincts, your “that’s not a secret” mental models — all of it needs a second look.
The rules changed. Go check your keys.
