Ever Lost Your Soul?
When the AI Company With a “Soul” Meets the Bottom Line
I’ve sat in that chair. Not Dario Amodei’s specifically — mine had worse lumbar support and the coffee was from a Keurig — but the same chair. The one where you’re the security person trying to explain to a room full of excited stakeholders why we need to slow down. Why the thing they want to ship right now needs another review. Why “we’ll fix it later” is how breaches start.
It’s a lonely chair.
So when I read CNN’s reporting that Anthropic — the AI company that literally branded itself as the one with a “soul” — is loosening its core safety commitments in response to competitive pressure, I didn’t feel shock. I felt recognition.
Full disclosure: I’m writing this article using Claude, which is built by Anthropic. That’s the company this article critiques. Take that however you want — I think it makes the perspective more interesting, not less.
$ cat cnn_report_2026-02-25.log
The Article: What Happened
On February 25, 2026, CNN reported that Anthropic is replacing its two-year-old Responsible Scaling Policy — which included hard commitments to pause model training if capabilities outstripped safety controls — with a new “Frontier Safety Roadmap” that the company itself describes as more flexible.
The key changes:
- Hard commitments → “public goals.” Anthropic’s previous policy had binding guardrails. The new framework is explicitly nonbinding. In their own words: “Rather than being hard commitments, these are public goals that we will openly grade our progress towards.”
- The pause clause is gone. The old policy said Anthropic should stop training more powerful models if they couldn’t control them. That provision has been removed. Their argument: responsible developers pausing while irresponsible ones race ahead could “result in a world that is less safe.”
- Industry leadership → self-preservation. Anthropic originally designed its safety policy to encourage a “race to the top” across the industry. They now acknowledge that hasn’t happened, and they’re separating their internal safety plans from their recommendations for others.
This all landed the same week Defense Secretary Pete Hegseth gave Amodei a Friday deadline to roll back the company’s AI safeguards or lose a $200 million Pentagon contract — with a threat to invoke the Defense Production Act and blacklist Anthropic as a supply chain risk.
Anthropic says the policy change and the Pentagon fight are unrelated. The timing, at minimum, is remarkable.
$ grep -r "we'll fix it later" /var/log/meetings/
I’ve Seen This Movie Before
Here’s what I know from years of helping organizations evaluate their security posture and design appropriate controls: every single compromise on security arrives with a justification. Every one. Nobody walks into a meeting and says “let’s be reckless.” They say:
- “Our competitors aren’t doing this, and it’s slowing us down.”
- “We’ll address it in the next phase.”
- “The risk is theoretical. We need to ship.”
- “We can’t afford to lose this contract.”
- “If we don’t move fast, someone less careful will take our place.”
Sound familiar? Here’s Anthropic’s chief science officer Jared Kaplan, almost verbatim: “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”
I’m not quoting that to mock him. I’m quoting it because I’ve heard some version of it from every CTO, VP of Engineering, and product owner I’ve ever worked with. It’s the most natural, most rational, most dangerous sentence in corporate security.
$ diff --color profit.conf security.conf
The Tug-of-War
Security creates friction. That’s not a bug — it’s the whole point. Locks are friction. Code reviews are friction. Compliance is friction. The question is never “should we eliminate friction?” It’s “how much friction can the business tolerate while still remaining safe?”
Every organization I’ve worked with — from 3-person startups with no InfoSec budget to Fortune 500s with entire security divisions — faces this tension. The conversation always boils down to the same equation:
Speed × Innovation × Profit vs. Safety × Stability × Trust
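If you want to see how that equation plays out in a real meeting, here is a toy model. To be clear: every number, every name, and the decision rule itself are mine, invented for illustration. No real risk framework works this crudely, and this reflects nobody's actual process, Anthropic's included.

```python
# Toy model of the speed-vs-safety conversation. All numbers, names,
# and the decision rule are invented for illustration only.

def expected_annual_loss(incident_probability: float, impact_usd: float) -> float:
    """Classic ALE-style estimate: likelihood times impact."""
    return incident_probability * impact_usd

def ship_now(velocity_gain_usd: float, risk_usd: float, pressure: float) -> bool:
    """Under pressure, organizations quietly discount the risk side.
    'pressure' runs from 0.0 (calm waters) to 1.0 (a $200M contract
    on the line)."""
    discounted_risk = risk_usd * (1.0 - pressure)
    return velocity_gain_usd > discounted_risk

risk = expected_annual_loss(incident_probability=0.05, impact_usd=50_000_000)

print(ship_now(velocity_gain_usd=2_000_000, risk_usd=risk, pressure=0.0))
# False: the risk side wins when nobody is squeezing

print(ship_now(velocity_gain_usd=2_000_000, risk_usd=risk, pressure=0.95))
# True: "we can't afford to lose this contract"
```

Notice that the math never changes. Only the discount applied to the risk side does. That is what pressure actually buys.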
Startups feel it because they’re burning cash and need to ship. Enterprises feel it because there’s a hot product and the executive team wants to iterate fast. Anthropic is feeling it because they’re competing against OpenAI for enterprise customers, the Pentagon is threatening a $200 million contract, and the political climate has shifted against AI regulation.
The pressure is real. I say that with genuine empathy. Being the person (or company) advocating for caution in a room that wants speed is exhausting, thankless, and — in the current AI landscape — potentially existential for the business.
$ tail -f /var/log/closed_door_meetings.log
Behind Closed Doors
I feel for the security people at Anthropic. I genuinely do. I can’t imagine the conversations happening inside that building right now — but I don’t have to imagine the shape of them. I’ve been there.
If you’ve spent any time in InfoSec leadership, you know the patterns. You’ve seen executives treat compliance as a checkbox to be purchased rather than a standard to be met. You’ve watched security presentations get shut down mid-slide because the findings were “too negative” — and then watched the team get restructured when leadership decided they wanted someone more “aligned with the business.” You’ve felt the squeeze when an audit starts going sideways and the message from above is unmistakable: make it work.
These aren’t hypotheticals. Ask any security professional with a decade in the field and they’ll have their own versions. The details change. The pattern doesn’t.
My point is: the pressure is immense. And I get frustrated with folks on the outside who say that if they were ever in a similar situation, they’d simply quit and find a better job. That sounds great in a LinkedIn post. In reality, if you walked out every time you were pressured to compromise on something you believed in, you’d run out of places to walk to. It’s not that simple.
The pressure doesn’t come from one direction. It comes from the parent company, the investors, the board — from names you only hear when things get serious. You may work for a security-minded, reasonable boss. But when the company’s survival is on the line or huge sums of money are at stake, the gloves come off. Everything you learned in your certifications and training gets tested against business realities, and business realities don’t grade on a curve.
Conversations behind closed doors get very direct. If you don’t understand the stakes — don’t worry — someone will lay them out for you.
That’s not true everywhere. I have many more good stories than bad. But the bad ones leave marks.
Somewhere inside Anthropic right now, or inside whatever government agency is pushing the buttons, somebody is taking a beating, and I feel for them. When you’re the security person at a company whose entire identity is built around safety, watching that identity get renegotiated in real time, the loneliness of that chair hits different.
$ cat /etc/anthropic/founding_principles.conf
Why This One Matters More
But here’s where my empathy has limits.
Anthropic didn’t just happen to be a safety-focused company. They were founded on it. The Amodei siblings left OpenAI specifically because they were concerned about AI safety. They built their entire brand, their investor pitch, their recruiting pipeline, and their public identity around being the responsible alternative.
When a startup skimps on security because they don’t have the budget, that’s survival economics. When a Fortune 500 cuts corners on a product launch, that’s misaligned incentives. When the company that defined itself by its safety commitments downgrades those commitments from “hard” to “aspirational” — that’s a signal to the entire industry that safety is negotiable. That the “soul” has a price tag.
$ nmap -sV pentagon-contract.gov --script=risk-assessment
The Pentagon Angle: When the Customer Is the Government
The Pentagon confrontation adds another dimension that security professionals know well: when your biggest customer demands something that conflicts with your security posture.
Anthropic drew two red lines with the Pentagon: no AI-controlled weapons (because AI isn’t reliable enough) and no mass domestic surveillance of American citizens (because no laws or regulations exist to govern it). These are defensible positions. AI researchers applauded them. Even OpenAI CEO Sam Altman publicly backed Anthropic’s stance.
But the government’s response — threatening blacklisting, invoking the Defense Production Act — is the ultimate “comply or lose the contract” pressure. Security professionals have a term for this: it’s a risk acceptance decision made under duress. And when the entity applying the duress controls $200 million and your ability to do future business with the federal government, “unrelated” policy changes start looking awfully convenient.
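For readers outside the field: a legitimate risk acceptance is documented, owned by a named person, time-boxed, and paired with compensating controls. Here is a rough sketch of the shape; the field names are mine, not from any particular GRC standard or tool.

```python
from dataclasses import dataclass, field
from datetime import date

# Sketch of a risk-acceptance record. Field names are illustrative,
# not taken from any specific GRC standard or tool.

@dataclass
class RiskAcceptance:
    risk_id: str
    description: str
    accepted_by: str                          # a named, accountable owner
    rationale: str                            # a business reason, on the record
    expires: date                             # acceptances are time-boxed
    compensating_controls: list[str] = field(default_factory=list)

acceptance = RiskAcceptance(
    risk_id="RISK-2026-017",
    description="Relax safety gate ahead of frontier model release",
    accepted_by="CEO",
    rationale="Customer contract at risk",    # note what's missing: any analysis
    expires=date(2026, 12, 31),
    compensating_controls=["publish quarterly transparency report"],
)
print(acceptance.rationale)
```

Notice there is no field for “accepted under threat of blacklisting.” Duress never shows up in the record. It hides inside the rationale.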
$ cat lessons_learned.md
What Security Professionals Can Learn From This
Whether you work in AI, cloud infrastructure, healthcare, or any other industry, the Anthropic situation is a case study in a pattern you will absolutely encounter:
- Commitments made in calm waters are tested in storms. Writing a security policy when business is good and competition is manageable is easy. Keeping it when the market is on fire and your biggest customer is threatening to walk? That’s when you find out what the policy was actually worth.
- “Nonbinding” is a red flag. If your organization’s security framework is described as “flexible,” “aspirational,” or “goals we’ll grade ourselves on” — you don’t have a security framework. You have a marketing document.
- The “race to the bottom” argument is seductive and wrong. Every industry has actors who cut corners. Pointing at them to justify lowering your own standards is how entire industries end up with systemic risk. Ask the financial sector circa 2008.
- Separate the internal from the external pressure. Anthropic is dealing with competitive pressure (internal business decision) and government coercion (external political pressure) simultaneously. These require different responses. Conflating them — even unconsciously — leads to worse outcomes on both fronts.
- Transparency is not the same as accountability. Anthropic says its new policy is “the strongest to date on public accountability and transparency.” They’ve committed to publishing detailed reports. That’s good. But publishing a report about how you fell short of your goals is not the same as having enforceable commitments that prevent you from falling short. A dashcam doesn’t stop the accident. (I’ve sketched this distinction in code right after this list.)
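That last distinction is easier to see in code than in policy language. Below is a deliberately toy sketch (the threshold, the names, and the score are all invented; this resembles no real evaluation pipeline) of the difference between a hard commitment and a “public goal”:

```python
# Two versions of the same safety gate. The threshold, names, and
# score are invented; this resembles no real evaluation pipeline.

CAPABILITY_THRESHOLD = 0.8  # hypothetical "more capable than our controls" line

def gate_as_hard_commitment(capability_score: float) -> None:
    """The old Responsible Scaling Policy shape: cross the line, stop."""
    if capability_score > CAPABILITY_THRESHOLD:
        raise RuntimeError("Pause training: capabilities exceed safety controls")

def gate_as_public_goal(capability_score: float) -> None:
    """The 'flexible framework' shape: cross the line, publish a report."""
    if capability_score > CAPABILITY_THRESHOLD:
        print("Transparency report: we fell short of our goal. Shipping anyway.")

gate_as_public_goal(0.9)            # prints a report; execution continues

try:
    gate_as_hard_commitment(0.9)
except RuntimeError as exc:
    print(exc)                      # the release actually stops here
```

One of these is an enforceable control. The other is a dashcam.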
$ exit
The Uncomfortable Truth
I’m going to be honest about something: I don’t have a clean answer here.
If Anthropic holds firm on every safety commitment and their competitors build less safe AI that captures the market — the world might genuinely end up worse. That’s not a hypothetical concern. It’s the core dilemma of unilateral disarmament in any competitive landscape.
But if the company that was supposed to prove you can be both safe and competitive gives up on the “safe” part when it gets hard? That proves the opposite. It tells every startup, every enterprise, every government contractor that safety is a nice-to-have — something you wave around when it’s cheap and convenient, and quietly shelve when the stakes get real.
The tug-of-war between profit and security will never be resolved. It’s structural. What we can control is how honestly we acknowledge the tension, how rigorously we maintain our commitments, and how loudly we call it out — with empathy, but without equivocation — when the rope starts slipping.
As security professionals, it’s our job. Even when it’s thankless. Even when the chair is uncomfortable. Even when the other side has $200 million and a deadline.
