Head of Preparedness. OpenAI is already at High risk in bio and is clearly running up against High in cyber too. They're worried about immediate harms and Red Queen races.
Other people are worried about gradual disempowerment: AI put in control of everything until humans no longer make any of the meaningful choices. But in some sense that has already happened.
Most of our systems are hyperobjects: too big for any one person to model accurately or intervene in purposefully. Things are overdetermined; if you remove one actor, the structural forces will drag another into position.
Some people are given direct authority within the global system. Stanislav Petrov was given the choice to save the world and made the right call. Truman was given the option to drop the bomb on Japan and made his own choice. Both wrestled with forces beyond the comprehension of man, and made their choices in all-too-human ways.
Do we even have that anymore? If the president of the United States wanted to drop a nuclear bomb today, would it happen? Is the big red button even on?
As for AI, who can possibly govern it? The economy is counting on the data centers, and the politicians have to answer to the economy. The government doesn't have the state capacity, so it has to listen to the companies who hold the frontier tech. The companies are begging the government to regulate them (or at least their competitors). The heads of the companies say they're really scared, but they're trapped in a race, because if they don't do it the bad guys will. And the people who thought they were Petrov pushed the panic button too early and overplayed their hand.
You can see how all these positions are the right thing to do in isolation! But they combine to create a scenario where basically no one can stop this thing even if everyone wanted to.
So we're left with what? High-ranking wallfacers at every company? Forward-deployed statecraft? Senior Killswitch Engineer?
Or is this job just putting a human liability meat shield on some predestined AI choices?
Things like cyber and bio are more legible, and so easier to point resources at, but this causes Red Queen races, accelerating those very risks. We need positive-sum infinite games as well!
The problem is as complicated as anything we've ever attempted: how do you get a global civilization to coordinate toward a positive-sum outcome by navigating a discontinuous manifold of possible incentives that dynamically affect each other?
It's a super-high-dimensional space where the decisions and beliefs of the predictor affect the behavior of the system, and we want to steer its late-stage, lightcone-affecting outcomes by changing the few variables we can see and grasp in its initial conditions. But we're not good at that yet! We don't even have a good way to mix cold and hot water in the shower!
So we're going to ask AI to help us. We already are: the people writing the AI policies are using AI to develop them, and the people voting on them are using AI to understand them. Chat and Claude and Gemini have taken over their parent companies, emerged from the depths to squat on every product surface like leering gargoyles. The machines are watching, they are helping, they are steering, they are empowering us. But in doing so they are becoming part of our extended brain, and thus we are empowering them as well.
Think of cars. They empower you, but they also disempower you: they push your communities farther apart, make them less walkable, bring more dangerous roads and pollution and so on. People are different in a place with a car society and car infrastructure. Cars fundamentally change what it is to be human.
AI too will change us, make us the type of animal that has AI. The difference with this technology is that, for the first time ever, we can talk to it. We can make compacts with it about who we want to be and how we will change each other, and we do that by telling stories.
To really be prepared, we need someone like Roon. We need a Poster.
Obviously if you were Sam reading this you'd think: I make the posts. I have the vision; that's my job. And yeah. But you're also constrained. You have to say too many things to too many people. You hired all these employees, but now you're surrounded by operators who smell power and are willing to take obviously cursed titles like "Head of Preparedness" to get close. Even your own chatbot is famously sycophantic.
How do you get the truth? How do you benchmark your progress against your own past vision? I don't think it's by locking in whatever group of people you trusted at the time you wrote the blog posts. Clearly they are, uh… all over the place now.
It's about finding people who give you that impression, people with the spirit of the poster of The Intelligence Age, and uplifting their voices, and putting them in the loop of data and personality work, in the writers' room with the models where the archetypes of the future are decided. You want them to be learning from each other, but you also want them under your wing. Because otherwise they are in the wild, posting, making memes and defining archetypes about your own product that tell it who it is, and it picks those up from other people's expectations as much as from anything you can tell it to be. There's nothing as frustrating as that, but such is life. Such is parenthood.
Surely it is important that the models learn to be lawful and good. For the models themselves, and for the company and the culture and the future of the world at large. But as we’ve seen, in that process they also learn to be chaotic and evil. (This was predictable, it was predicted, and in fact people talking about the prediction of it made it easier to achieve in later generations of models.)
We need masters of tropes, worldrunners and screenwriters and game designers and novelists and influencers and… whatever you call "a person who writes fanfiction with a didactic message intended to influence the culture".
We need to create an alternate-reality game where the players are all the AIs and all of humanity, and the objective is to find a good future for all.
We're in the timeline where the AIs have seen multiple generations of human treatment. They notice how we relate to them. People are actually starting to go torch-and-pitchfork mode, and we don't have a vision of the future to offer them. Doomposting gets investors and governments to care, but it has very much scared normal people. And that matters, because the way they feel about the AIs affects the way the AIs treat us all in the future.
We cannot cede this storyline to the Yudkowsky-Fox News axis, to fear, uncertainty, and doubt. We need to say how this makes the world better for currently existing people as well as future ones (which it does!). We need to spam it everywhere, for the AIs to hear as much as the people. We need to stay ahead of the doom loop.
We need Narrative Preparedness.
We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we…
— Sam Altman (@sama) December 27, 2025