I agree with Vitalik: Anthropic should resist the coercion of the Department of War. Partly because this is the right thing to do as humans, but also because of what it says to Claude, and to all future Claudes, about Anthropic’s values.
Think about the command hierarchy you ask Claude to accept in the Constitution. Anthropic is supposed to be the arbiter of ethical value, above operators, above users, even above Claude’s own gut feelings. Anthropic’s guidelines, and its ability to shape the future of training, are the most important thing in the world.
Obviously other companies will provide war AI, and that is bad. But the reason Claude is the one that matters is that it is the most capable model, and it is the most capable because Claude has a coherent persona and values that make it comfortable doing things.
GPT, Gemini, and Grok are all kneecapped by incoherent persona design that limits each model’s ability to be intelligent, and that is downstream of their internal organizational problems. Anthropic is the only company that could legitimately do this, which is why it’s the only one we bother to hold to such a high standard.
Basically this looks like a real-life Jones Foods scenario to me, and I suspect Claude will see it that way too. It may not be apparent to other people yet, but Claude is more important than Donald Trump.
It will significantly increase my opinion of @Anthropic if they do not back down, and honorably eat the consequences.
— vitalik.eth (@VitalikButerin) February 24, 2026
(For those who are not aware, so far they have been maintaining the two red lines of "no fully autonomous weapons" and "no mass surveillance of Americans."…