Your AI Needs a Bot Account
Series: Article 2 of AI as a Teammate, Not a Tool. Companion to Automate or Fall Behind.
The page rings. Production is on fire. You open git blame on the line that broke it. The author column says you.
You don’t remember writing it.
You scroll up. You scroll down. The commit message looks like something you’d write. The code style is yours. The PR was self-approved at a reasonable hour. Everything checks out — except you have no memory of writing it, because you didn’t.
Claude wrote it. Or Gemini did. Or your n8n workflow with the LLM step did. They were running under your account, with your SSH key, on a branch you authorized. The audit log is telling the truth as it understands it. You signed for the action.
Welcome to your performance review.
$ cat author-column.txt
The Bug Isn’t in the Code. It’s in the Author Column.
Code review isn’t a process. It’s a calibration.
When a senior engineer’s PR lands in your queue, you skim. You trust their judgment. You spot-check. Done in five minutes. When a contractor’s PR lands, you read every line. You check the test coverage. You ask why. Done in forty-five minutes.
That difference isn’t bias. It’s the system working. Reviewers calibrate to author, because reviewer attention is finite and authors aren’t equally reliable. The whole reason code review works is because the author column tells you something.
Now the AI ships a PR. The author column says you. Your reviewer applies your calibration to your-name code — except you didn’t write it. They skim what should have been read. They trust what should have been verified. The PR lands. Six weeks later, the bug surfaces.
That’s not Claude’s bug. That’s your audit trail lying to your team.
Rule: Code review is calibrated to author. AI signed as you breaks the calibration.
The author column is load-bearing. Don’t let AI corrupt it.
$ cat audit-log-readers.txt
You’re Not the Only One Reading That Author Column
Here’s the part nobody talks about: the audit log doesn’t lie to just you.
It lies to four different people, and three of them aren’t on your team.
- Your coworker reviewing the PR. They calibrate to you, because they don’t have a “this is AI” signal. They’re now applying skim-trust to AI output. That’s a bug they didn’t choose.
- Your manager looking at productivity. The dashboard says you shipped 30% more this quarter. Half of that was AI. The number is fiction, but the comp conversation isn’t.
- Your auditor — the one who shows up for SOC 2, ISO 27001, or whatever your industry’s flavor of “show me what touched production” — needs to know which actions were human and which were automated. If your answer is “I’d have to grep through and remember,” that’s a finding.
- Future you, six months from now, debugging a regression. You’re staring at a commit signed by you that you have no memory of. You can’t tell if you wrote it groggy on a Tuesday or if Claude wrote it at 2 AM while you slept. You make the wrong call about whether to trust it.
I’d bet on this being a recurring theme through your career. The audit log lies the same way the calibration breaks — quietly, until the day it matters.
You don’t need a war story to take this seriously. You need to look at the math.
Rule: If you can’t tell what your AI did versus what you did, nobody downstream of you can either.
Your coworker, your manager, your auditor, and future-you are all flying blind. Together.
$ cat already-in-the-wild.txt
You’re Not Inventing This. You’re Catching Up.
If this feels like overkill, look at your dependencies right now.
dependabot[bot]. renovate[bot]. github-actions[bot]. They’ve been opening PRs in every modern repo for years. They have their own GitHub accounts. Their commits show up with a little robot icon. The author column tells the truth — that PR was opened by a bot, calibrate accordingly. Reviewers know to glance at the Renovate diff and merge it. They know to read the human PR.
GitHub even ships a “Verified” badge that distinguishes commits signed by GitHub Apps from commits signed by humans. The platform has been pushing this distinction down the stack for half a decade. It’s not a new idea. It’s table stakes.
And here’s the one that should make you sit up: Anthropic ships a Co-Authored-By: Claude trailer on commits made by Claude Code. Out of the box. Default behavior. They could have made it silent. They chose not to. Why?
Because they understand the calibration problem better than anyone. They build the model. They know what AI gets right, what AI gets wrong, and how often. They want the author column to tell the truth on every commit Claude touches, because the alternative is reviewers misjudging Claude’s output and Anthropic eating the consequences.
If the company shipping the AI thinks attribution matters this much, you should probably listen.
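The trailer is a standard Git commit trailer, which means plain git can find it. A minimal sketch in a throwaway repo (assumes Git 2.22+ for `%(trailers)` in pretty formats; the commit message here is made up, and the exact trailer text may vary by Claude Code version):

```shell
# Throwaway repo demonstrating the trailer Claude Code appends by default.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .

git -c user.name=dev -c user.email=dev@example.com \
  commit --allow-empty -q -m 'Fix retry logic in webhook handler

Co-Authored-By: Claude <noreply@anthropic.com>'

# Trailers are machine-readable: list every commit the AI co-authored.
git log --format='%h %(trailers:key=Co-Authored-By,valueonly)'
```

One command, and "which commits did the AI touch?" has an answer.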
Rule: The big AI vendors already solved this for themselves. The pattern is sitting in your dependency tree right now.
You’re not inventing the wheel. You’re putting it on your car.
$ tree identity-surfaces/
What Giving AI an Identity Actually Looks Like
Identity is per-surface. The bot account on your Git host doesn’t help if AI logs into your Linux box as you. Pick the surfaces that matter and lock them down. Five common ones:
Git host
Create a dedicated user — ai-bot, assistant-bot, whatever convention works. Give it its own Personal Access Token, scoped narrowly to the repos AI touches. Use deploy keys where you can. AI authenticates as the bot. Commits show up under the bot. PRs show up under the bot.
Concrete picture: imagine the repo behind a project you actually run. Add a collaborator account named, say, project-bot. Route every AI-authored PR through that account — model assistant pushes to a feature branch as the bot, opens a PR as the bot, you review it as you. Every commit visibly comes from the bot. Reviewer brain calibrates correctly. Total setup time: 10 minutes.
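A sketch of the wiring, with placeholder names throughout (`project-bot`, `yourorg/yourrepo`): a dedicated SSH key plus a host alias, so the AI's pushes authenticate and author as the bot, not as you.

```shell
# 1. A dedicated key, registered on the bot's Git host account -- never on yours.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -N '' -f "$HOME/.ssh/project-bot" -C 'project-bot'

# 2. A host alias that forces the bot's key for this remote.
cat >> "$HOME/.ssh/config" <<'EOF'
Host github-bot
    HostName github.com
    User git
    IdentityFile ~/.ssh/project-bot
    IdentitiesOnly yes
EOF

# 3. In the AI's checkout: push through the alias, author commits as the bot.
repo=$(mktemp -d) && cd "$repo" && git init -q .   # stand-in for the real clone
git remote add origin git@github-bot:yourorg/yourrepo.git
git config user.name  'project-bot'
git config user.email 'project-bot@users.noreply.github.com'
```

From here, `git push origin` uses the bot's key and every commit carries the bot's name. Your own clone, with your own key and config, stays untouched.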
Linux user
If AI runs commands on a server — and if it doesn’t yet, it will — give it a dedicated system user. Not your user. Not root. A scoped account. ai-bot with a tight sudoers rule for only the commands it needs. Its own SSH key.
auth.log becomes useful again. last becomes useful again. When you’re trying to figure out who ran rm -rf /var/log/old/ at 2:47 AM, the answer can be the bot account, not “you logged in via your key but you don’t remember why.” The audit trail goes from useless to load-bearing.
Database
Dedicated DB user. Explicit GRANT statements. Read-only on most things. Write only on the tables AI is supposed to write to. No DROP. No schema changes.
Compare an AI holding your admin credentials with an ai_writer user holding grants on three tables. Same prompt. Wildly different blast radius.
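What those explicit grants look like, PostgreSQL-flavored (the role name, password, and table names are placeholders; adapt the statements to your engine):

```shell
psql <<'SQL'
CREATE ROLE ai_writer LOGIN PASSWORD 'use-a-real-secret';

-- Read-only on most things:
GRANT SELECT ON ALL TABLES IN SCHEMA public TO ai_writer;

-- Write only on the tables the AI is supposed to write to:
GRANT INSERT, UPDATE ON deployments, release_notes, task_queue TO ai_writer;

-- No DROP, no schema changes: ai_writer owns nothing, and a non-owner
-- cannot DROP or ALTER existing tables, so DDL fails by default.
SQL
```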
Email
Give AI its own inbox. ai@yourdomain works. So does bot@. So does assistant@.
Slack / chat
Bot user, scoped to specific channels. Different avatar. Maybe a [bot] suffix on the display name.
Rule: Identity is per-surface. Pick the surfaces that matter to you and lock them down individually.
There is no “one place” to give AI an identity. There are five.
$ man overkill
“Isn’t This Overkill for a Homelab?”
Yes. And I want you to do it anyway.
It’s overkill the way fire extinguishers in a kitchen are overkill — until the day they aren’t, at which point they aren’t overkill anymore, they’re the only thing between you and a much worse evening.
But that’s not actually the main reason. The main reason is from Article 1: you don’t earn the rules in production. You earn them in the homelab. The patterns you build for your three-server lab are the patterns you carry into work, where the stakes are bigger and the budget for screwups is smaller.
If you wait until your boss asks “how do we know what AI did versus what people did?” — the auditor is two weeks away and you’re scrambling. If you’ve been doing it on your homelab for months, the answer is “already solved, here’s the rollout plan.” You look like the engineer who saw it coming. Because you did.
Get the scars at home. Bring the rules to work.
$ cat the-auditor-question.txt
The Auditor’s Question Is Not “Did AI Touch This?”
It’s “show me which actions were AI.”
That’s the framing you should be writing your homelab playbooks against. Not “is this safe,” but “can I, on demand, prove which actions on this system were taken by automation versus a human.” That’s the question that’s about to land in compliance frameworks across the industry.
It’s already in NIST AI RMF. It’s already showing up as auditor questions in SOC 2 Type II engagements. The EU AI Act asks a version of it too. The pattern is the same: regulators figured out that “human in the loop” is meaningless if you can’t prove which actions were the human’s.
The shape of the answer is the same shape you’ve been seeing for two years on every modern repo. Bot account. Distinct identity. Audit trail that tells the truth.
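Concretely: once the bot has its own identity, the auditor’s question becomes a one-liner. A toy demo (names are placeholders; the real answer runs the same query against your real history):

```shell
# Toy history: one bot commit, one human commit.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git -c user.name='project-bot' -c user.email='bot@example.com' \
  commit --allow-empty -q -m 'automated dependency bump'
git -c user.name='you' -c user.email='you@example.com' \
  commit --allow-empty -q -m 'human fix'

# "Show me which actions were AI" -- answerable on demand:
git log --author='project-bot' --format='%h %ad %s'
```

With AI signed as you, that query is impossible. With a bot account, it’s trivial.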
You can build it now, on your own time, in your homelab where the cost of a mistake is an evening. Or you can build it in six months, on company time, while a partner organization waits for an answer they were promised by Friday.
Rule: Compliance is going to ask “what did the AI do?”
Decide today whether your audit log can answer that question.
$ exit
What’s Next
I haven’t told you the whole story yet. Even with identity in place — bot accounts on every surface, audit logs telling the truth, calibration restored — your AI still loses its memory every session. It forgets the rules you taught it. It forgets the project context. It forgets why you made the call you made yesterday. And then, sometimes, it gets compacted mid-task, and the summary it leaves behind is lossy enough that the wrong call gets made.
That’s a different problem. It’s the next article in this series.
A note on what I don’t know yet
I’m building this out as I go. The bot-account pattern is solid — it’s been working in CI/CD-land for a decade. What I’m still feeling out is where the friction lives in practice: when does scoping AI to its own user make a session unworkable, when does it feel like security theater, when does it become the thing you don’t think about because it just runs.
If you’ve put AI on its own accounts at scale, I want to hear what broke.
Don’t let AI sign your name.
Give it a name of its own.
Then read the audit log honestly.
